Dec 12 14:10:47 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 14:10:47 crc kubenswrapper[5108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:47 crc kubenswrapper[5108]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 12 14:10:47 crc kubenswrapper[5108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:47 crc kubenswrapper[5108]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:47 crc kubenswrapper[5108]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 14:10:47 crc kubenswrapper[5108]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.236987    5108 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240659    5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240685    5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240691    5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240697    5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240703    5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240708    5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240713    5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240721    5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240727    5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240733    5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240738    5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240745    5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240751    5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240757    5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240762    5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240767    5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240772    5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240776    5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240781    5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240786    5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240791    5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240796    5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240800    5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240805    5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240810    5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240815    5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240820    5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240825    5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240829    5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240834    5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240848    5108 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240853    5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240858    5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240862    5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240868    5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240872    5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240877    5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240882    5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240887    5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240895    5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240900    5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240905    5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240910    5108 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240915    5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240920    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240925    5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240930    5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240935    5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240940    5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240945    5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240949    5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240955    5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240959    5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240964    5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240968    5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240973    5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240978    5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240982    5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240987    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240993    5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.240998    5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241003    5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241008    5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241022    5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241027    5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241032    5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241038    5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241044    5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241049    5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241056    5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241062    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241105    5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241113    5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241119    5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241124    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241128    5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241133    5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241138    5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241143    5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241147    5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241152    5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241157    5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241162    5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241166    5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241171    5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241175    5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241740    5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241751    5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241757    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241762    5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241767    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241772    5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241777    5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241782    5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241786    5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241792    5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241798    5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241802    5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241808    5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241812    5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241817    5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241826    5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241831    5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241836    5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241841    5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241845    5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241850    5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241855    5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241860    5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241864    5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241871    5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241877    5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241882    5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241886    5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241892    5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241896    5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241901    5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241906    5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241910    5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241916    5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241920    5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241925    5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241930    5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241934    5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241939    5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241944    5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241949    5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241953    5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241959    5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241964    5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241969    5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241974    5108 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241979    5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241986    5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241990    5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.241995    5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242000    5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242005    5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242009    5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242015    5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242019    5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242024    5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242029    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242036    5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242042    5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242048    5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242054    5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242059    5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242064    5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242069    5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242100    5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242106    5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242110    5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242115    5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242119    5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242124    5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242131    5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242135    5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242140    5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242145    5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242150    5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242180    5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242185    5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242189    5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242194    5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242202    5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242207    5108 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242212    5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242216    5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242222    5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242226    5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.242231    5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242648    5108 flags.go:64] FLAG: --address="0.0.0.0"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242665    5108 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242677    5108 flags.go:64] FLAG: --anonymous-auth="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242686    5108 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242704    5108 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242711    5108 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242719    5108 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242727    5108 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242733    5108 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242738    5108 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242745    5108 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242751    5108 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242756    5108 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242762    5108 flags.go:64] FLAG: --cgroup-root=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242767    5108 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242773    5108 flags.go:64] FLAG: --client-ca-file=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242778    5108 flags.go:64] FLAG: --cloud-config=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242783    5108 flags.go:64] FLAG: --cloud-provider=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242819    5108 flags.go:64] FLAG: --cluster-dns="[]"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242829    5108 flags.go:64] FLAG: --cluster-domain=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242834    5108 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242840    5108 flags.go:64] FLAG: --config-dir=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242846    5108 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242852    5108 flags.go:64] FLAG: --container-log-max-files="5"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242858    5108 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242865    5108 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242870    5108 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242876    5108 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242881    5108 flags.go:64] FLAG: --contention-profiling="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242887    5108 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242892    5108 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242897    5108 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242903    5108 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242910    5108 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242915    5108 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242920    5108 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242926    5108 flags.go:64] FLAG: --enable-load-reader="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242931    5108 flags.go:64] FLAG: --enable-server="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242936    5108 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242943    5108 flags.go:64] FLAG: --event-burst="100"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242948    5108 flags.go:64] FLAG: --event-qps="50"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242954    5108 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242959    5108 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242964    5108 flags.go:64] FLAG: --eviction-hard=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242971    5108 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242976    5108 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242982    5108 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242987    5108 flags.go:64] FLAG: --eviction-soft=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242992    5108 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.242998    5108 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243006    5108 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243011    5108 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243017    5108 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243022    5108 flags.go:64] FLAG: --fail-swap-on="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243027    5108 flags.go:64] FLAG: --feature-gates=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243034    5108 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243040    5108 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243046    5108 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243051    5108 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243057    5108 flags.go:64] FLAG: --healthz-port="10248"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243063    5108 flags.go:64] FLAG: --help="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243068    5108 flags.go:64] FLAG: --hostname-override=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243074    5108 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243097    5108 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243102    5108 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243107    5108 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243112    5108 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243118    5108 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243123    5108 flags.go:64] FLAG: --image-service-endpoint=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243128    5108 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243134    5108 flags.go:64] FLAG: --kube-api-burst="100"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243139    5108 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243145    5108 flags.go:64] FLAG: --kube-api-qps="50"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243150    5108 flags.go:64] FLAG: --kube-reserved=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243155    5108 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243160    5108 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243166    5108 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243171    5108 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243176    5108
flags.go:64] FLAG: --lock-file="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243182 5108 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243187 5108 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243192 5108 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243203 5108 flags.go:64] FLAG: --log-json-split-stream="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243208 5108 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243213 5108 flags.go:64] FLAG: --log-text-split-stream="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243219 5108 flags.go:64] FLAG: --logging-format="text" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243224 5108 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243231 5108 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243237 5108 flags.go:64] FLAG: --manifest-url="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243243 5108 flags.go:64] FLAG: --manifest-url-header="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243250 5108 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243256 5108 flags.go:64] FLAG: --max-open-files="1000000" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243263 5108 flags.go:64] FLAG: --max-pods="110" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243269 5108 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243274 5108 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243279 5108 
flags.go:64] FLAG: --memory-manager-policy="None" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243284 5108 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243289 5108 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243295 5108 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243300 5108 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243313 5108 flags.go:64] FLAG: --node-status-max-images="50" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243318 5108 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243323 5108 flags.go:64] FLAG: --oom-score-adj="-999" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243329 5108 flags.go:64] FLAG: --pod-cidr="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243334 5108 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243343 5108 flags.go:64] FLAG: --pod-manifest-path="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243348 5108 flags.go:64] FLAG: --pod-max-pids="-1" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243354 5108 flags.go:64] FLAG: --pods-per-core="0" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243359 5108 flags.go:64] FLAG: --port="10250" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243364 5108 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243370 5108 flags.go:64] FLAG: --provider-id="" Dec 12 14:10:47 crc kubenswrapper[5108]: 
I1212 14:10:47.243375 5108 flags.go:64] FLAG: --qos-reserved="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243380 5108 flags.go:64] FLAG: --read-only-port="10255" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243386 5108 flags.go:64] FLAG: --register-node="true" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243393 5108 flags.go:64] FLAG: --register-schedulable="true" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243398 5108 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243408 5108 flags.go:64] FLAG: --registry-burst="10" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243414 5108 flags.go:64] FLAG: --registry-qps="5" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243419 5108 flags.go:64] FLAG: --reserved-cpus="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243424 5108 flags.go:64] FLAG: --reserved-memory="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243430 5108 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243437 5108 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243442 5108 flags.go:64] FLAG: --rotate-certificates="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243448 5108 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243453 5108 flags.go:64] FLAG: --runonce="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243458 5108 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243464 5108 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243470 5108 flags.go:64] FLAG: --seccomp-default="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243475 
5108 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243480 5108 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243486 5108 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243491 5108 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243497 5108 flags.go:64] FLAG: --storage-driver-password="root" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243502 5108 flags.go:64] FLAG: --storage-driver-secure="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243507 5108 flags.go:64] FLAG: --storage-driver-table="stats" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243512 5108 flags.go:64] FLAG: --storage-driver-user="root" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243517 5108 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243523 5108 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243528 5108 flags.go:64] FLAG: --system-cgroups="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243533 5108 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243541 5108 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243547 5108 flags.go:64] FLAG: --tls-cert-file="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243552 5108 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243559 5108 flags.go:64] FLAG: --tls-min-version="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243564 5108 flags.go:64] FLAG: --tls-private-key-file="" Dec 12 14:10:47 crc kubenswrapper[5108]: 
I1212 14:10:47.243570 5108 flags.go:64] FLAG: --topology-manager-policy="none" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243577 5108 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243582 5108 flags.go:64] FLAG: --topology-manager-scope="container" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243587 5108 flags.go:64] FLAG: --v="2" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243594 5108 flags.go:64] FLAG: --version="false" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243601 5108 flags.go:64] FLAG: --vmodule="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243608 5108 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.243616 5108 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243746 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243754 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243760 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243765 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243784 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243790 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243796 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243801 5108 feature_gate.go:328] unrecognized feature gate: 
NetworkSegmentation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243807 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243812 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243817 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243823 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243828 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243833 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243838 5108 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243843 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243848 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243853 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243859 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243864 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243869 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243874 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 
14:10:47.243879 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243885 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243890 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243897 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243902 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243907 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243913 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243918 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243922 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243929 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243934 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243939 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243945 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243950 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243954 5108 feature_gate.go:328] 
unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243959 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243964 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243969 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243974 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243979 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243983 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243988 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243993 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.243998 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244005 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244012 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244018 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244023 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244029 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244035 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244041 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244046 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244050 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244057 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244062 5108 feature_gate.go:328] unrecognized feature gate: Example Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244069 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244074 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244098 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244103 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244108 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244112 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244119 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244124 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244129 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244133 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244139 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244144 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244149 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 14:10:47 crc 
kubenswrapper[5108]: W1212 14:10:47.244154 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244159 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244164 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244169 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244173 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244179 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244183 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244188 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244193 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244197 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244202 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244207 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244211 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244216 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244221 5108 
feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.244225 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.244241 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.252400 5108 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.252424 5108 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252491 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252500 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252511 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252516 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252521 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252525 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252530 5108 feature_gate.go:328] unrecognized 
feature gate: PinnedImages Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252534 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252538 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252542 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252546 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252550 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252553 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252568 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252572 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252576 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252580 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252583 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252586 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252590 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252593 5108 feature_gate.go:328] unrecognized feature gate: 
ManagedBootImages Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252597 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252601 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252612 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252616 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252620 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252624 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252628 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252632 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252636 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252640 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252645 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252651 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252657 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252661 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252666 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252670 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252674 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252677 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252680 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252684 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252687 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252690 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252693 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252696 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252700 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252703 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 
14:10:47.252706 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252709 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252712 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252715 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252719 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252722 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252726 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252729 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252732 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252743 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252746 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252749 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252752 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252755 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252759 5108 feature_gate.go:328] unrecognized feature gate: 
MultiDiskSetup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252762 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252765 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252769 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252772 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252775 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252778 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252781 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252784 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252788 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252792 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252799 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252803 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252806 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252810 5108 feature_gate.go:328] unrecognized feature gate: Example Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252813 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252817 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252820 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252823 5108 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252826 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252830 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252833 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252836 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252839 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.252843 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 14:10:47 crc 
kubenswrapper[5108]: I1212 14:10:47.252849 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253012 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253020 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253032 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253036 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253039 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253043 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253046 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253049 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253052 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253056 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 14:10:47 crc 
kubenswrapper[5108]: W1212 14:10:47.253059 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253062 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253066 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253069 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253073 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253090 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253094 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253097 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253100 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253103 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253106 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253109 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253113 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253116 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253119 5108 
feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253122 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253126 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253130 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253133 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253137 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253141 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253144 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253147 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253150 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253154 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253163 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253166 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253169 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 14:10:47 crc kubenswrapper[5108]: 
W1212 14:10:47.253173 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253176 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253179 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253182 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253185 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253189 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253193 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253197 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253200 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253204 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253207 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253210 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253214 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253217 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253221 5108 feature_gate.go:328] unrecognized 
feature gate: SigstoreImageVerification Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253224 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253227 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253230 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253233 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253236 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253240 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253243 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253246 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253249 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253253 5108 feature_gate.go:328] unrecognized feature gate: Example Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253257 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253260 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253263 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253267 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253271 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253280 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253283 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253287 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253290 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253293 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253296 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253299 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253303 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253306 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253309 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253313 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253316 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253319 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 
14:10:47.253322 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253325 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253329 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253332 5108 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.253335 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.253341 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.253636 5108 server.go:962] "Client rotation is on, will bootstrap in background" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.258469 5108 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.261765 5108 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.261900 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 14:10:47 crc 
kubenswrapper[5108]: I1212 14:10:47.262542 5108 server.go:1019] "Starting client certificate rotation" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.262753 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.262856 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.269406 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.272669 5108 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.273164 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.279624 5108 log.go:25] "Validated CRI v1 runtime API" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.302343 5108 log.go:25] "Validated CRI v1 image API" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.303733 5108 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.306284 5108 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-12-14-04-48-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.306318 5108 fs.go:136] Filesystem 
partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.323981 5108 manager.go:217] Machine: {Timestamp:2025-12-12 14:10:47.322666786 +0000 UTC m=+0.230657965 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:fba542e1-ce5a-4556-a3dc-e51e5c5391bd BootID:b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 
DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:37:dc:34 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:37:dc:34 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:59:8f:3b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6a:17:7f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f7:c9:f2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e5:f9:76 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:02:c9:87:b1:7d:fe Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6a:36:bf:7a:11:f9 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction 
Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.324207 5108 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm 
support. Perf event counters are not available. Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.324347 5108 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.325287 5108 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.325321 5108 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcil
ePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.325499 5108 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.325509 5108 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.325528 5108 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.325693 5108 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.326094 5108 state_mem.go:36] "Initialized new in-memory state store" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.326250 5108 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.327270 5108 kubelet.go:491] "Attempting to sync node with API server" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.327384 5108 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.327411 5108 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.327424 5108 kubelet.go:397] "Adding apiserver pod source" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.327446 5108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.328972 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.328992 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.329429 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.329453 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.330957 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.330975 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.332696 5108 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.332960 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.333501 5108 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 
14:10:47.333997 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334027 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334037 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334045 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334052 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334061 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334070 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334093 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334102 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334116 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334133 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334274 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334501 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.334519 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 
14:10:47.335301 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.345680 5108 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.345763 5108 server.go:1295] "Started kubelet" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.346003 5108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.346184 5108 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.346348 5108 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.347177 5108 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 14:10:47 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.347630 5108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.347444 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18807d23f6b67ed5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.345716949 +0000 UTC m=+0.253708108,LastTimestamp:2025-12-12 14:10:47.345716949 +0000 UTC m=+0.253708108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.348024 5108 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.348158 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.349271 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.349433 5108 volume_manager.go:295] "The desired_state_of_world populator starts"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.349449 5108 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.349563 5108 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.350382 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.351488 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="200ms"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.354138 5108 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.354177 5108 factory.go:55] Registering systemd factory
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.354191 5108 factory.go:223] Registration of the systemd container factory successfully
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.354669 5108 factory.go:153] Registering CRI-O factory
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.354687 5108 factory.go:223] Registration of the crio container factory successfully
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.354872 5108 factory.go:103] Registering Raw factory
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.354910 5108 manager.go:1196] Started watching for new ooms in manager
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.356535 5108 manager.go:319] Starting recovery of all containers
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374144 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374415 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374428 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374437 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374445 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374453 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374461 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374469 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374479 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374487 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374495 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374503 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374510 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374520 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374530 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374539 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374548 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374556 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374564 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374572 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374580 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374589 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374597 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374605 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374616 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374623 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374631 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374639 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374651 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374678 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374686 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374695 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374705 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374713 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374721 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374751 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374760 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374769 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374776 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374784 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374795 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374805 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374813 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374821 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374830 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374838 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374846 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374854 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374865 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374875 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374886 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374894 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374901 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374925 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374933 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374942 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374969 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374978 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374987 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.374994 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375003 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375011 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375023 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375033 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375042 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375050 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375058 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375065 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375073 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375182 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375190 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375199 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375208 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375216 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375224 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375232 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375240 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375248 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375257 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375265 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375274 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375282 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375290 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375317 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375325 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375333 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375341 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375350 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375359 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375368 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375376 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375383 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375392 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375400 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375407 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375416 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375424 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375431 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375440 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375448 5108 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375455 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375463 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375471 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375479 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375487 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375495 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375503 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375519 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375526 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375534 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375543 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375552 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375585 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375593 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375601 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375609 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375616 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375624 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375632 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375640 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375648 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375656 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375664 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375672 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" 
volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375679 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375687 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375695 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375703 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375711 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375719 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" 
seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375727 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375740 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375749 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375757 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375765 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375773 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 
14:10:47.375784 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375792 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375803 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375811 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375820 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375829 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375836 5108 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375843 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375851 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375861 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375872 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375883 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375891 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" 
volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375899 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375907 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375915 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375922 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375929 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375937 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" 
seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375944 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375952 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375960 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375967 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375975 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375984 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375991 
5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.375998 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376006 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376013 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376021 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376029 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376037 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376048 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376055 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376062 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376070 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376092 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376101 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376109 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376116 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376124 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376132 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376140 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376147 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" 
volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376155 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376164 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376172 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376179 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376187 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376195 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376217 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376227 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376234 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.376245 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377001 5108 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377026 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377049 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377057 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377065 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377091 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377102 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377113 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377123 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377133 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377148 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377158 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377168 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377177 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377189 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377199 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377210 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377222 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377234 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377244 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377255 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377271 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377283 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377294 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377305 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377316 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377326 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377337 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377347 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377358 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377370 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377380 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377394 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377405 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377417 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377471 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377483 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377493 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377503 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377513 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377522 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377533 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377545 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377555 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377565 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377575 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377585 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377595 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377605 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377615 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377625 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377637 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377648 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377659 5108 reconstruct.go:97] "Volume reconstruction finished"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.377666 5108 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.384476 5108 manager.go:324] Recovery completed
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.397797 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.399038 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.399074 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.399108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.400315 5108 cpu_manager.go:222] "Starting CPU manager" policy="none"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.400332 5108 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.400347 5108 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.404226 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.406102 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.406145 5108 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.406174 5108 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.406186 5108 kubelet.go:2451] "Starting kubelet main sync loop"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.406289 5108 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.407216 5108 policy_none.go:49] "None policy: Start"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.407249 5108 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.407266 5108 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.408856 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.450298 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.452737 5108 manager.go:341] "Starting Device Plugin manager"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.452788 5108 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.452805 5108 server.go:85] "Starting device plugin registration server"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.453261 5108 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.453313 5108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.453645 5108 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.453726 5108 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.453740 5108 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.462314 5108 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.462371 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.507033 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.507273 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.508011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.508066 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.508109 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.508827 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.509057 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.509160 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.509277 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.509310 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.509360 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510051 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510162 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510220 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510296 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510332 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510345 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510812 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.510847 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.511071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.511110 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.511122 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.511816 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.512133 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.512187 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.512343 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.512370 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.512382 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.512965 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513190 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513222 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513195 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513256 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513235 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513395 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513418 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513431 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513891 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.513901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.514161 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.514216 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.514620 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.514645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.514654 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.542693 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.551798 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.552261 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="400ms"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.554438 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.555137 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.555166 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.555174 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.555190 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.555589 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: connection refused" node="crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.558755 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.580779 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.580828 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.580859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581027 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581073 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581117 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581177 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581258 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581292 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581319 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581341 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581387 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581411 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName:
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581457 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581478 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581498 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581520 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581550 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581569 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581571 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581638 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581665 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581867 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.581881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.582189 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.582263 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.582462 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc 
kubenswrapper[5108]: I1212 14:10:47.582517 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.589649 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.597358 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.682905 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.682957 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.682986 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.682992 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683009 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683028 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683048 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683067 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683116 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683130 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683145 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683154 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683168 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683183 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683186 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683199 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683208 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683211 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683224 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683231 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683237 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683253 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683275 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683295 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683316 5108 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683337 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683364 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683384 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683417 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.683439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.756467 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.757423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.757452 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.757461 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.757484 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.757881 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: connection refused" node="crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.843558 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.852380 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.859926 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.866316 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-19ae390945e52eaeda530d10ed43761c07dfb6407d3453104004a410cefd9362 WatchSource:0}: Error finding container 19ae390945e52eaeda530d10ed43761c07dfb6407d3453104004a410cefd9362: Status 404 returned error can't find the container with id 19ae390945e52eaeda530d10ed43761c07dfb6407d3453104004a410cefd9362 Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.873263 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.874619 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-9ee834a3aee1cffeed5259cb4d0aa9e4cc90979ae85d2c52713b899aa9d8f079 WatchSource:0}: Error finding container 9ee834a3aee1cffeed5259cb4d0aa9e4cc90979ae85d2c52713b899aa9d8f079: Status 404 returned error can't find the container with id 9ee834a3aee1cffeed5259cb4d0aa9e4cc90979ae85d2c52713b899aa9d8f079 Dec 12 14:10:47 crc kubenswrapper[5108]: W1212 14:10:47.881156 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-1dfdd75416aad60ebf7a68b9779f5cbd1da733a5095be4c303cea2b5a23b5e3f WatchSource:0}: Error finding container 1dfdd75416aad60ebf7a68b9779f5cbd1da733a5095be4c303cea2b5a23b5e3f: Status 404 returned error can't find the container with id 1dfdd75416aad60ebf7a68b9779f5cbd1da733a5095be4c303cea2b5a23b5e3f Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.889882 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: I1212 14:10:47.897631 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:47 crc kubenswrapper[5108]: E1212 14:10:47.953858 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="800ms" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.158829 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.161632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.161676 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.161692 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.161719 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.162201 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: connection refused" node="crc" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.336546 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.217:6443: connect: 
connection refused Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.389409 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.417812 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"bd5ec9690ba6bd343756405ba75cd1a1c43371e5ae4089055a25b13bd13192fb"} Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.417882 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"86e077d5fc651313785e3455ff1a03b08462b29caf1b24c2bbccf255002c86ee"} Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.417993 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.418799 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.418851 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.418867 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.419129 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:48 crc 
kubenswrapper[5108]: I1212 14:10:48.420387 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"5facef15506b672b50e49a2af35e560c16278885552fa5ca84aef580a62600cd"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.420452 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2157e2bc003839c22e276894f0bf73d28c52cf43b36e024dc25bcf9df2b835db"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.420768 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.422372 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.422408 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.422420 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.422642 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.423712 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3071f4069d4990c1608adf401cdef4fd74f2c0fa8bfa273c3449a82eb8145bd1"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.423758 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1dfdd75416aad60ebf7a68b9779f5cbd1da733a5095be4c303cea2b5a23b5e3f"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.425487 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.425515 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"9ee834a3aee1cffeed5259cb4d0aa9e4cc90979ae85d2c52713b899aa9d8f079"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.425682 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.426319 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.426356 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.426369 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.426604 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.427487 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"65794f3f33dea6a6573ec10459ed3d71dddf5f88abe63c08ea870e714bd3f860"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.427515 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"19ae390945e52eaeda530d10ed43761c07dfb6407d3453104004a410cefd9362"}
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.427638 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.428419 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.428479 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.428492 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.428802 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.482700 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.585829 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.754481 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="1.6s"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.788623 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.962660 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.965645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.965690 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.965714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:48 crc kubenswrapper[5108]: I1212 14:10:48.965744 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:48 crc kubenswrapper[5108]: E1212 14:10:48.966430 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: connection refused" node="crc"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.337032 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.376934 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 14:10:49 crc kubenswrapper[5108]: E1212 14:10:49.378116 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.431945 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="5facef15506b672b50e49a2af35e560c16278885552fa5ca84aef580a62600cd" exitCode=0
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.432175 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"5facef15506b672b50e49a2af35e560c16278885552fa5ca84aef580a62600cd"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.432659 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.434247 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.434299 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.434313 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:49 crc kubenswrapper[5108]: E1212 14:10:49.434582 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.436449 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9606526e72945e2bdcc544b1a59b749b245e207eb82139068ef6f2660ed3f967"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.436482 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"efd2c4628b568e7a7df978e9dd2882e3117a909a190a67291da3d1a34da4c54b"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.436495 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b1b915a7062859a28f39946201a150036d69bbd22dab9497b756fb8e1fe85006"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.436608 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.437041 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.437097 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.437109 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:49 crc kubenswrapper[5108]: E1212 14:10:49.437329 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.438595 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece" exitCode=0
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.438633 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.438725 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.439289 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.439326 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.439340 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:49 crc kubenswrapper[5108]: E1212 14:10:49.439560 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.440329 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="65794f3f33dea6a6573ec10459ed3d71dddf5f88abe63c08ea870e714bd3f860" exitCode=0
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.440346 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="0d5aa17a706a840cf2378c515099d25962d06e875982e7f63fb86384306330aa" exitCode=0
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.440395 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"65794f3f33dea6a6573ec10459ed3d71dddf5f88abe63c08ea870e714bd3f860"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.440425 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"0d5aa17a706a840cf2378c515099d25962d06e875982e7f63fb86384306330aa"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.440555 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.441131 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.441161 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.441172 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:49 crc kubenswrapper[5108]: E1212 14:10:49.441346 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.441661 5108 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="bd5ec9690ba6bd343756405ba75cd1a1c43371e5ae4089055a25b13bd13192fb" exitCode=0
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.441689 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"bd5ec9690ba6bd343756405ba75cd1a1c43371e5ae4089055a25b13bd13192fb"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.441719 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"932ed3c5bcf1854189f1d3805b64414035dc03af3e619ca1b55677669ac65b25"}
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.441878 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.442467 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.442664 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.442696 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.442708 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:49 crc kubenswrapper[5108]: E1212 14:10:49.442904 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.442936 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.442963 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:49 crc kubenswrapper[5108]: I1212 14:10:49.442975 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:49 crc kubenswrapper[5108]: E1212 14:10:49.973280 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18807d23f6b67ed5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.345716949 +0000 UTC m=+0.253708108,LastTimestamp:2025-12-12 14:10:47.345716949 +0000 UTC m=+0.253708108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.451363 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c06468165e42ba1af3edb9beb8d2ddba87a68dd250bf7528e5ba90b68294caf2"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.451414 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"36786b270a1470e2233eba132595c6cd183b0501ef34eb3f35cdfec4ab982a2c"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.451429 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c1bde64be63bd13e11fa4a3b8b946caaf68e32617cbbd90496fd1c71c5a4fd68"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.451543 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.451976 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.451997 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.452007 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:50 crc kubenswrapper[5108]: E1212 14:10:50.452177 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.454198 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.454225 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.454237 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.454247 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.458550 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="20b42b583304f935da956f0e33a5b2c9f6cec4f570c43a0daab976b1cf6c01da" exitCode=0
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.458702 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"20b42b583304f935da956f0e33a5b2c9f6cec4f570c43a0daab976b1cf6c01da"}
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.458782 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.459018 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.462895 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.462932 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.462944 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:50 crc kubenswrapper[5108]: E1212 14:10:50.463320 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.463761 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.463781 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.463791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:50 crc kubenswrapper[5108]: E1212 14:10:50.463946 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.566568 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.567561 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.567621 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.567632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:50 crc kubenswrapper[5108]: I1212 14:10:50.567655 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.220234 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.464410 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2df077ce0b1a499df875fcdf82c2b9b01adf124460e7e02924b3e0d7a810d83a"}
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.464624 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.465277 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.465311 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.465323 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:51 crc kubenswrapper[5108]: E1212 14:10:51.465520 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.470425 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f81d65dd2849c53db5ca57925ec1c2aa5bab16f99922d614bd44602e714189dd"}
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.470453 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"094d61ef533aaf36ef1292d5d3cf93d98b904768d2dd2302c105813cf086a12a"}
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.470460 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.470465 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"75e4059b371028e5ac87de2124153ec0f015515b7d7cccb6fe820baf9f24e855"}
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.470589 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0ccda5f0b75af7937025a7de6c6e514dfd9fa150a51bc9b030ed54f2fadb4e47"}
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.471052 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.471168 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:51 crc kubenswrapper[5108]: I1212 14:10:51.471196 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:51 crc kubenswrapper[5108]: E1212 14:10:51.471863 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.403286 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.477662 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4a6bc7349df8b4362b693300a658ae0d6e3d61236dad5a9bbb2a14f0954d2640"}
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.477906 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.477938 5108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.477995 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.478416 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.478859 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.478901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.478914 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.479049 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.479146 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.479177 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.479202 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.479222 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:52 crc kubenswrapper[5108]: I1212 14:10:52.479234 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:52 crc kubenswrapper[5108]: E1212 14:10:52.479249 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:52 crc kubenswrapper[5108]: E1212 14:10:52.479862 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:52 crc kubenswrapper[5108]: E1212 14:10:52.480038 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.202222 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.206586 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.214972 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.275969 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.443596 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.480622 5108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.480677 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.480700 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.480681 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481410 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481434 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481446 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481481 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481492 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481550 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481562 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.481573 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:53 crc kubenswrapper[5108]: E1212 14:10:53.481847 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: E1212 14:10:53.482145 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: E1212 14:10:53.482351 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.606877 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.696847 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:53 crc kubenswrapper[5108]: I1212 14:10:53.988514 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.127625 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.127871 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.128690 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.128747 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.128759 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:54 crc kubenswrapper[5108]: E1212 14:10:54.129172 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.482198 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.482214 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.482361 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.482938 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.482974 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.483001 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.483008 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.483034 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.482941 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.483048 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.483175 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:54 crc kubenswrapper[5108]: I1212 14:10:54.483200 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:54 crc kubenswrapper[5108]: E1212 14:10:54.483609 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:54 crc kubenswrapper[5108]: E1212 14:10:54.483781 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:54 crc kubenswrapper[5108]: E1212 14:10:54.484043 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:55 crc kubenswrapper[5108]: I1212 14:10:55.403636 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Dec 12 14:10:55 crc kubenswrapper[5108]: I1212 14:10:55.403741 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Dec 12 14:10:55 crc kubenswrapper[5108]: I1212 14:10:55.484031 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:55 crc kubenswrapper[5108]: I1212 14:10:55.484604 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:55 crc kubenswrapper[5108]: I1212 14:10:55.484658 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:55 crc kubenswrapper[5108]: I1212 14:10:55.484675 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:55 crc kubenswrapper[5108]: E1212 14:10:55.485338 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:57 crc kubenswrapper[5108]:
E1212 14:10:57.462665 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:10:57 crc kubenswrapper[5108]: I1212 14:10:57.980707 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 12 14:10:57 crc kubenswrapper[5108]: I1212 14:10:57.981370 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:57 crc kubenswrapper[5108]: I1212 14:10:57.982824 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:57 crc kubenswrapper[5108]: I1212 14:10:57.982896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:57 crc kubenswrapper[5108]: I1212 14:10:57.982915 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:57 crc kubenswrapper[5108]: E1212 14:10:57.983667 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:00 crc kubenswrapper[5108]: I1212 14:11:00.337407 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 12 14:11:00 crc kubenswrapper[5108]: E1212 14:11:00.355837 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 12 14:11:00 crc kubenswrapper[5108]: I1212 14:11:00.531533 5108 trace.go:236] Trace[1612143011]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:50.529) (total time: 10002ms): Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[1612143011]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:11:00.531) Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[1612143011]: [10.002238505s] [10.002238505s] END Dec 12 14:11:00 crc kubenswrapper[5108]: E1212 14:11:00.531823 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 14:11:00 crc kubenswrapper[5108]: E1212 14:11:00.568653 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 12 14:11:00 crc kubenswrapper[5108]: I1212 14:11:00.639128 5108 trace.go:236] Trace[1141118908]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:50.637) (total time: 10001ms): Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[1141118908]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:11:00.639) Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[1141118908]: [10.00109038s] [10.00109038s] END Dec 12 14:11:00 crc kubenswrapper[5108]: E1212 14:11:00.639184 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 14:11:00 crc kubenswrapper[5108]: I1212 14:11:00.874571 5108 trace.go:236] Trace[782197306]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:50.872) (total time: 10001ms): Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[782197306]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:11:00.874) Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[782197306]: [10.001633023s] [10.001633023s] END Dec 12 14:11:00 crc kubenswrapper[5108]: E1212 14:11:00.875037 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 14:11:00 crc kubenswrapper[5108]: I1212 14:11:00.957550 5108 trace.go:236] Trace[1482725849]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:50.955) (total time: 10001ms): Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[1482725849]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:11:00.957) Dec 12 14:11:00 crc kubenswrapper[5108]: Trace[1482725849]: [10.001961762s] [10.001961762s] END Dec 12 14:11:00 crc kubenswrapper[5108]: E1212 14:11:00.957595 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 14:11:01 crc kubenswrapper[5108]: I1212 
14:11:01.019688 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 14:11:01 crc kubenswrapper[5108]: I1212 14:11:01.019759 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 14:11:01 crc kubenswrapper[5108]: I1212 14:11:01.027750 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 14:11:01 crc kubenswrapper[5108]: I1212 14:11:01.028132 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 14:11:03 crc kubenswrapper[5108]: E1212 14:11:03.561023 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Dec 12 14:11:03 crc kubenswrapper[5108]: I1212 14:11:03.769632 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Dec 12 14:11:03 crc kubenswrapper[5108]: I1212 14:11:03.771209 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:03 crc kubenswrapper[5108]: I1212 14:11:03.771269 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:03 crc kubenswrapper[5108]: I1212 14:11:03.771293 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:03 crc kubenswrapper[5108]: I1212 14:11:03.771330 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:11:03 crc kubenswrapper[5108]: E1212 14:11:03.784939 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 14:11:03 crc kubenswrapper[5108]: I1212 14:11:03.997856 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:03 crc kubenswrapper[5108]: I1212 14:11:03.998187 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.000023 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.000063 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.000096 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:04 crc kubenswrapper[5108]: E1212 14:11:04.000467 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.003529 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.489170 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.489328 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.490331 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.490358 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.490367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:04 crc kubenswrapper[5108]: E1212 14:11:04.490622 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.506151 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.506851 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.506992 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:04 crc kubenswrapper[5108]: I1212 14:11:04.507168 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 14:11:04 crc kubenswrapper[5108]: E1212 14:11:04.507706 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:05 crc kubenswrapper[5108]: E1212 14:11:05.177900 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 14:11:05 crc kubenswrapper[5108]: I1212 14:11:05.404754 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 14:11:05 crc kubenswrapper[5108]: I1212 14:11:05.404848 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 14:11:05 crc kubenswrapper[5108]: E1212 14:11:05.610111 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 14:11:05 crc kubenswrapper[5108]: E1212 14:11:05.937378 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes 
\"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.019996 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.024137 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f6b67ed5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.345716949 +0000 UTC m=+0.253708108,LastTimestamp:2025-12-12 14:10:47.345716949 +0000 UTC m=+0.253708108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.028767 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC 
m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC m=+0.307053467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.033823 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.039166 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e54559 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,LastTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.043371 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23fd4783b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.455884212 +0000 UTC m=+0.363875371,LastTimestamp:2025-12-12 14:10:47.455884212 +0000 UTC m=+0.363875371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.048173 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e47b10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.508034599 +0000 UTC m=+0.416025758,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.054794 5108 event.go:359] "Server rejected 
event (will not retry!)" err="events \"crc.18807d23f9e5141d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.50807238 +0000 UTC m=+0.416063539,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.060312 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e54559\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e54559 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,LastTimestamp:2025-12-12 14:10:47.508116099 +0000 UTC m=+0.416107258,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.064726 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e47b10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.509296309 +0000 UTC m=+0.417287468,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.071430 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e5141d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.50931718 +0000 UTC m=+0.417308339,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.077757 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e54559\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e54559 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,LastTimestamp:2025-12-12 14:10:47.509372423 +0000 UTC m=+0.417363582,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.081604 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e47b10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.510316129 +0000 UTC m=+0.418307288,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.085349 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e5141d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.510339817 +0000 UTC m=+0.418330976,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.090142 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e54559\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e54559 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,LastTimestamp:2025-12-12 14:10:47.510350702 +0000 UTC m=+0.418341861,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.095403 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e47b10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC 
m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.510827649 +0000 UTC m=+0.418818808,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.101098 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e5141d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.510842491 +0000 UTC m=+0.418833650,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.105716 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e54559\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e54559 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,LastTimestamp:2025-12-12 14:10:47.510853146 +0000 UTC m=+0.418844305,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.109788 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e47b10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.511097876 +0000 UTC m=+0.419089035,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.113390 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e5141d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.511117556 +0000 UTC m=+0.419108715,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.120800 5108 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e54559\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e54559 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,LastTimestamp:2025-12-12 14:10:47.511128211 +0000 UTC m=+0.419119370,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.124879 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e47b10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.512362775 +0000 UTC m=+0.420353934,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.131306 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e5141d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.512377558 +0000 UTC m=+0.420368717,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.135805 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e54559\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e54559 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399114073 +0000 UTC m=+0.307105242,LastTimestamp:2025-12-12 14:10:47.512387453 +0000 UTC m=+0.420378612,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.140345 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e47b10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e47b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399062288 +0000 UTC m=+0.307053467,LastTimestamp:2025-12-12 14:10:47.513214607 +0000 UTC m=+0.421205766,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.144401 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d23f9e5141d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d23f9e5141d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.399101469 +0000 UTC m=+0.307092628,LastTimestamp:2025-12-12 14:10:47.51322954 +0000 UTC m=+0.421220689,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.149447 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24162d6eed openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.873605357 +0000 UTC m=+0.781596516,LastTimestamp:2025-12-12 14:10:47.873605357 +0000 UTC m=+0.781596516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.154343 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d241687fe79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.879540345 +0000 UTC m=+0.787531504,LastTimestamp:2025-12-12 14:10:47.879540345 +0000 UTC m=+0.787531504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.159441 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d2416d0a155 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.884300629 +0000 UTC m=+0.792291788,LastTimestamp:2025-12-12 14:10:47.884300629 +0000 UTC m=+0.792291788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.163850 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d2419364d06 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.92451815 +0000 
UTC m=+0.832509309,LastTimestamp:2025-12-12 14:10:47.92451815 +0000 UTC m=+0.832509309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.168289 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d241945da0a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:47.92553729 +0000 UTC m=+0.833528459,LastTimestamp:2025-12-12 14:10:47.92553729 +0000 UTC m=+0.833528459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.178424 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d24345ea943 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.380148035 +0000 UTC m=+1.288139194,LastTimestamp:2025-12-12 14:10:48.380148035 +0000 UTC m=+1.288139194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.185664 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d243472f180 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.381477248 +0000 UTC m=+1.289468427,LastTimestamp:2025-12-12 14:10:48.381477248 +0000 UTC m=+1.289468427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.191947 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d243472f18a openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.381477258 +0000 UTC m=+1.289468437,LastTimestamp:2025-12-12 14:10:48.381477258 +0000 UTC m=+1.289468437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.209509 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24347ac0ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.38198905 +0000 UTC m=+1.289980239,LastTimestamp:2025-12-12 14:10:48.38198905 +0000 UTC m=+1.289980239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.216924 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d24347c4b44 
openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.382090052 +0000 UTC m=+1.290081211,LastTimestamp:2025-12-12 14:10:48.382090052 +0000 UTC m=+1.290081211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.222690 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d243549745c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.395535452 +0000 UTC m=+1.303526611,LastTimestamp:2025-12-12 14:10:48.395535452 +0000 UTC m=+1.303526611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.227919 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d243564dd9a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.397331866 +0000 UTC m=+1.305323025,LastTimestamp:2025-12-12 14:10:48.397331866 +0000 UTC m=+1.305323025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.232513 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d243599a484 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.40079066 +0000 UTC m=+1.308781819,LastTimestamp:2025-12-12 14:10:48.40079066 +0000 UTC m=+1.308781819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc 
kubenswrapper[5108]: E1212 14:11:06.236742 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d2435c7cb61 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.403815265 +0000 UTC m=+1.311806424,LastTimestamp:2025-12-12 14:10:48.403815265 +0000 UTC m=+1.311806424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.241828 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d2435c88bd8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.403864536 +0000 UTC m=+1.311855695,LastTimestamp:2025-12-12 14:10:48.403864536 +0000 UTC m=+1.311855695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc 
kubenswrapper[5108]: E1212 14:11:06.247049 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d2435cf5801 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.404310017 +0000 UTC m=+1.312301176,LastTimestamp:2025-12-12 14:10:48.404310017 +0000 UTC m=+1.312301176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.251842 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d2436c3485e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.420296798 +0000 UTC 
m=+1.328287957,LastTimestamp:2025-12-12 14:10:48.420296798 +0000 UTC m=+1.328287957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.256191 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24375a6538 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.43020012 +0000 UTC m=+1.338191279,LastTimestamp:2025-12-12 14:10:48.43020012 +0000 UTC m=+1.338191279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.261069 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d2449434827 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.730675239 +0000 UTC m=+1.638666398,LastTimestamp:2025-12-12 14:10:48.730675239 +0000 UTC m=+1.638666398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.266069 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d24495e39d2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.732441042 +0000 UTC m=+1.640432231,LastTimestamp:2025-12-12 14:10:48.732441042 +0000 UTC m=+1.640432231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.271816 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18807d2449641c3a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.732826682 +0000 UTC m=+1.640817861,LastTimestamp:2025-12-12 14:10:48.732826682 +0000 UTC m=+1.640817861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.277426 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d2449eb20d2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.741675218 +0000 UTC m=+1.649666377,LastTimestamp:2025-12-12 14:10:48.741675218 +0000 UTC m=+1.649666377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.281542 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d244a68b094 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.74990402 +0000 UTC m=+1.657895179,LastTimestamp:2025-12-12 14:10:48.74990402 +0000 UTC m=+1.657895179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.286457 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d244a8c4b9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.752237466 +0000 UTC m=+1.660228625,LastTimestamp:2025-12-12 14:10:48.752237466 +0000 UTC m=+1.660228625,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.290729 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d244b1b85ef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:48.761624047 +0000 UTC m=+1.669615196,LastTimestamp:2025-12-12 14:10:48.761624047 +0000 UTC m=+1.669615196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.294451 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d245f854ae9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.104100073 +0000 UTC m=+2.012091242,LastTimestamp:2025-12-12 
14:10:49.104100073 +0000 UTC m=+2.012091242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.299019 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d245ffb7971 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.111845233 +0000 UTC m=+2.019836392,LastTimestamp:2025-12-12 14:10:49.111845233 +0000 UTC m=+2.019836392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.303577 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.303673 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.305185 
5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d2460083b37 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.112681271 +0000 UTC m=+2.020672430,LastTimestamp:2025-12-12 14:10:49.112681271 +0000 UTC m=+2.020672430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.308835 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d246da18a46 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.340832326 +0000 UTC m=+2.248823485,LastTimestamp:2025-12-12 14:10:49.340832326 +0000 UTC m=+2.248823485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.309305 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49346->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.309379 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49346->192.168.126.11:17697: read: connection reset by peer" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.312842 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d246e719df6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.354468854 +0000 UTC m=+2.262460023,LastTimestamp:2025-12-12 14:10:49.354468854 +0000 UTC m=+2.262460023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.316903 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d247349bd08 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.435741448 +0000 UTC m=+2.343732607,LastTimestamp:2025-12-12 14:10:49.435741448 +0000 UTC m=+2.343732607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.321056 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2473ae48a0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.442330784 +0000 UTC m=+2.350321953,LastTimestamp:2025-12-12 14:10:49.442330784 +0000 UTC m=+2.350321953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.324935 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d2473bf0be4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.443429348 +0000 UTC m=+2.351420497,LastTimestamp:2025-12-12 14:10:49.443429348 +0000 UTC m=+2.351420497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.328372 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d247f05ee6a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.632624234 +0000 UTC m=+2.540615393,LastTimestamp:2025-12-12 14:10:49.632624234 +0000 UTC m=+2.540615393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.332675 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d247f0a6b32 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.632918322 +0000 UTC m=+2.540909481,LastTimestamp:2025-12-12 14:10:49.632918322 +0000 UTC m=+2.540909481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.333331 5108 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d247f6432f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.638802168 +0000 UTC m=+2.546793327,LastTimestamp:2025-12-12 14:10:49.638802168 +0000 UTC m=+2.546793327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.336660 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d247f8770b0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.641111728 +0000 UTC m=+2.549102907,LastTimestamp:2025-12-12 14:10:49.641111728 +0000 UTC m=+2.549102907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.340456 5108 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d247f99e8cc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.642322124 +0000 UTC m=+2.550313303,LastTimestamp:2025-12-12 14:10:49.642322124 +0000 UTC m=+2.550313303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.344594 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d247f9f7a98 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.642687128 +0000 UTC m=+2.550678297,LastTimestamp:2025-12-12 14:10:49.642687128 +0000 UTC m=+2.550678297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.344758 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.347547 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d248061aab8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.655413432 +0000 UTC m=+2.563404601,LastTimestamp:2025-12-12 14:10:49.655413432 +0000 UTC m=+2.563404601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.350145 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24807142e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.656435429 +0000 UTC m=+2.564426588,LastTimestamp:2025-12-12 14:10:49.656435429 +0000 UTC m=+2.564426588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.352929 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d248a75fdbc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.824517564 +0000 UTC m=+2.732508723,LastTimestamp:2025-12-12 14:10:49.824517564 +0000 UTC m=+2.732508723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.354206 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d248b49beb6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.838395062 +0000 UTC m=+2.746386221,LastTimestamp:2025-12-12 14:10:49.838395062 +0000 UTC m=+2.746386221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.359314 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d248b53c119 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.839051033 +0000 UTC m=+2.747042202,LastTimestamp:2025-12-12 14:10:49.839051033 +0000 UTC m=+2.747042202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.365664 5108 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d248b5c9f6d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.839632237 +0000 UTC m=+2.747623396,LastTimestamp:2025-12-12 14:10:49.839632237 +0000 UTC m=+2.747623396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.370450 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d248c0c40b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.851142324 +0000 UTC m=+2.759133483,LastTimestamp:2025-12-12 14:10:49.851142324 +0000 UTC 
m=+2.759133483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.375144 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d248c250655 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:49.852765781 +0000 UTC m=+2.760756950,LastTimestamp:2025-12-12 14:10:49.852765781 +0000 UTC m=+2.760756950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.379808 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2499bf1fbf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.080968639 +0000 UTC m=+2.988959798,LastTimestamp:2025-12-12 14:10:50.080968639 +0000 UTC m=+2.988959798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.386128 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d2499d47019 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.082365465 +0000 UTC m=+2.990356624,LastTimestamp:2025-12-12 14:10:50.082365465 +0000 UTC m=+2.990356624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.390226 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d249a6b7929 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.092263721 +0000 UTC m=+3.000254880,LastTimestamp:2025-12-12 14:10:50.092263721 +0000 UTC m=+3.000254880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.394821 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d249a7d50c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.093433032 +0000 UTC m=+3.001424191,LastTimestamp:2025-12-12 14:10:50.093433032 +0000 UTC m=+3.001424191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.399819 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.400008 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d249a88e456 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.094191702 +0000 UTC m=+3.002182861,LastTimestamp:2025-12-12 14:10:50.094191702 +0000 UTC m=+3.002182861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.404070 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24a73631b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.306875831 +0000 UTC m=+3.214866990,LastTimestamp:2025-12-12 14:10:50.306875831 +0000 UTC m=+3.214866990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.406597 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24a7d4efd7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.317279191 +0000 UTC m=+3.225270350,LastTimestamp:2025-12-12 14:10:50.317279191 +0000 UTC m=+3.225270350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.408424 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24a7e58bd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.318367699 +0000 UTC m=+3.226358858,LastTimestamp:2025-12-12 14:10:50.318367699 +0000 UTC m=+3.226358858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.414211 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24b0afe181 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.465845633 +0000 UTC m=+3.373836792,LastTimestamp:2025-12-12 14:10:50.465845633 +0000 UTC m=+3.373836792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 
14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.418849 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24b48e664f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.530760271 +0000 UTC m=+3.438751430,LastTimestamp:2025-12-12 14:10:50.530760271 +0000 UTC m=+3.438751430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.423436 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24b587e092 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.547110034 +0000 UTC m=+3.455101193,LastTimestamp:2025-12-12 14:10:50.547110034 +0000 UTC m=+3.455101193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.427998 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24bc41905d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.659942493 +0000 UTC m=+3.567933662,LastTimestamp:2025-12-12 14:10:50.659942493 +0000 UTC m=+3.567933662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.433549 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24bd104848 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.673489992 +0000 UTC m=+3.581481151,LastTimestamp:2025-12-12 14:10:50.673489992 +0000 UTC m=+3.581481151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 
14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.438610 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24bd242aa6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.674793126 +0000 UTC m=+3.582784285,LastTimestamp:2025-12-12 14:10:50.674793126 +0000 UTC m=+3.582784285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.444094 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24cb5b0674 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.913269364 +0000 UTC m=+3.821260523,LastTimestamp:2025-12-12 14:10:50.913269364 +0000 UTC m=+3.821260523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.448692 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24cc445527 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.928559399 +0000 UTC m=+3.836550558,LastTimestamp:2025-12-12 14:10:50.928559399 +0000 UTC m=+3.836550558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.453171 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24cc5635b7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.929730999 +0000 UTC m=+3.837722158,LastTimestamp:2025-12-12 14:10:50.929730999 +0000 UTC 
m=+3.837722158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.457374 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24db250bea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.178167274 +0000 UTC m=+4.086158443,LastTimestamp:2025-12-12 14:10:51.178167274 +0000 UTC m=+4.086158443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.461113 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24dc261fc6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.19501511 +0000 UTC m=+4.103006269,LastTimestamp:2025-12-12 14:10:51.19501511 +0000 UTC m=+4.103006269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.465810 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24dc3b10d9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.196387545 +0000 UTC m=+4.104378704,LastTimestamp:2025-12-12 14:10:51.196387545 +0000 UTC m=+4.104378704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.470163 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24e8f864a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.410121892 +0000 UTC m=+4.318113051,LastTimestamp:2025-12-12 14:10:51.410121892 +0000 UTC 
m=+4.318113051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.475184 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24e9d88994 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.424811412 +0000 UTC m=+4.332802571,LastTimestamp:2025-12-12 14:10:51.424811412 +0000 UTC m=+4.332802571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.479845 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24e9f2963a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.426518586 +0000 UTC 
m=+4.334509785,LastTimestamp:2025-12-12 14:10:51.426518586 +0000 UTC m=+4.334509785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.484657 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24f63a4e54 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.632545364 +0000 UTC m=+4.540536533,LastTimestamp:2025-12-12 14:10:51.632545364 +0000 UTC m=+4.540536533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.489485 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d24f6f00a1d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:51.644455453 +0000 UTC m=+4.552446642,LastTimestamp:2025-12-12 14:10:51.644455453 +0000 UTC 
m=+4.552446642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.496407 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 12 14:11:06 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-controller-manager-crc.18807d25d701b897 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 12 14:11:06 crc kubenswrapper[5108]: body: Dec 12 14:11:06 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:55.403710615 +0000 UTC m=+8.311701814,LastTimestamp:2025-12-12 14:10:55.403710615 +0000 UTC m=+8.311701814,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 14:11:06 crc kubenswrapper[5108]: > Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.500830 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d25d7033e41 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:55.403810369 +0000 UTC m=+8.311801548,LastTimestamp:2025-12-12 14:10:55.403810369 +0000 UTC m=+8.311801548,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.505579 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 14:11:06 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18807d2725bf7855 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 12 14:11:06 crc kubenswrapper[5108]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 14:11:06 crc kubenswrapper[5108]: Dec 12 14:11:06 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:01.019736149 +0000 UTC m=+13.927727308,LastTimestamp:2025-12-12 14:11:01.019736149 +0000 UTC m=+13.927727308,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 14:11:06 crc kubenswrapper[5108]: > Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.510566 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2725c02d15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:01.019782421 +0000 UTC m=+13.927773580,LastTimestamp:2025-12-12 14:11:01.019782421 +0000 UTC m=+13.927773580,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.512311 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.514007 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2df077ce0b1a499df875fcdf82c2b9b01adf124460e7e02924b3e0d7a810d83a" exitCode=255 Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.514093 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2df077ce0b1a499df875fcdf82c2b9b01adf124460e7e02924b3e0d7a810d83a"} Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.514319 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.515260 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.515340 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.515363 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.515786 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:06 crc kubenswrapper[5108]: I1212 14:11:06.516154 5108 scope.go:117] "RemoveContainer" containerID="2df077ce0b1a499df875fcdf82c2b9b01adf124460e7e02924b3e0d7a810d83a" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.516113 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d2725bf7855\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 14:11:06 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18807d2725bf7855 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with 
statuscode: 403 Dec 12 14:11:06 crc kubenswrapper[5108]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 14:11:06 crc kubenswrapper[5108]: Dec 12 14:11:06 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:01.019736149 +0000 UTC m=+13.927727308,LastTimestamp:2025-12-12 14:11:01.028055323 +0000 UTC m=+13.936046542,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 14:11:06 crc kubenswrapper[5108]: > Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.520780 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d2725c02d15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2725c02d15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:01.019782421 +0000 UTC m=+13.927773580,LastTimestamp:2025-12-12 14:11:01.028284009 +0000 UTC m=+13.936275208,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.526432 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 12 14:11:06 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-controller-manager-crc.18807d282b1e5c13 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 12 14:11:06 crc kubenswrapper[5108]: body: Dec 12 14:11:06 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:05.404808211 +0000 UTC m=+18.312799370,LastTimestamp:2025-12-12 14:11:05.404808211 +0000 UTC m=+18.312799370,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 14:11:06 crc kubenswrapper[5108]: > Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.531195 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d282b1f5cf5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:05.404873973 +0000 UTC m=+18.312865142,LastTimestamp:2025-12-12 14:11:05.404873973 +0000 UTC m=+18.312865142,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.536714 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 14:11:06 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18807d2860b1733d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": EOF Dec 12 14:11:06 crc kubenswrapper[5108]: body: Dec 12 14:11:06 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:06.303640381 +0000 UTC m=+19.211631540,LastTimestamp:2025-12-12 14:11:06.303640381 +0000 UTC m=+19.211631540,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 14:11:06 crc kubenswrapper[5108]: > Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.544190 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2860b24ce7 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:06.303696103 +0000 UTC m=+19.211687262,LastTimestamp:2025-12-12 14:11:06.303696103 +0000 UTC m=+19.211687262,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.549424 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 14:11:06 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18807d286108b2d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:49346->192.168.126.11:17697: read: connection reset by peer Dec 12 14:11:06 crc kubenswrapper[5108]: body: Dec 12 14:11:06 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:06.309358295 +0000 UTC m=+19.217349444,LastTimestamp:2025-12-12 14:11:06.309358295 +0000 UTC m=+19.217349444,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 14:11:06 crc kubenswrapper[5108]: > Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.554539 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2861095102 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49346->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:06.309398786 +0000 UTC m=+19.217389945,LastTimestamp:2025-12-12 14:11:06.309398786 +0000 UTC m=+19.217389945,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.559195 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d24a7e58bd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24a7e58bd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container 
image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.318367699 +0000 UTC m=+3.226358858,LastTimestamp:2025-12-12 14:11:06.517365196 +0000 UTC m=+19.425356355,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.733472 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d24b48e664f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24b48e664f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.530760271 +0000 UTC m=+3.438751430,LastTimestamp:2025-12-12 14:11:06.72804776 +0000 UTC m=+19.636038919,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:06 crc kubenswrapper[5108]: E1212 14:11:06.750596 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d24b587e092\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24b587e092 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.547110034 +0000 UTC m=+3.455101193,LastTimestamp:2025-12-12 14:11:06.743195166 +0000 UTC m=+19.651186335,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.340607 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.340835 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:07 crc kubenswrapper[5108]: E1212 14:11:07.463464 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.518025 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.519392 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47"} Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.519605 5108 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.520251 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.520289 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:07 crc kubenswrapper[5108]: I1212 14:11:07.520302 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:07 crc kubenswrapper[5108]: E1212 14:11:07.520667 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.028868 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.029097 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.029860 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.029899 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.029909 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:08 crc kubenswrapper[5108]: E1212 14:11:08.033893 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.050127 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-etcd/etcd-crc" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.342893 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.523177 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.523857 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.526655 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47" exitCode=255 Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.526884 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.526986 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.527112 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47"} Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.527362 5108 scope.go:117] "RemoveContainer" containerID="2df077ce0b1a499df875fcdf82c2b9b01adf124460e7e02924b3e0d7a810d83a" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.527927 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.528018 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.528140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.528323 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.528346 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.528355 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:08 crc kubenswrapper[5108]: E1212 14:11:08.529148 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:08 crc kubenswrapper[5108]: E1212 14:11:08.529230 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:08 crc kubenswrapper[5108]: I1212 14:11:08.529502 5108 scope.go:117] "RemoveContainer" containerID="d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47" Dec 12 14:11:08 crc kubenswrapper[5108]: E1212 14:11:08.529745 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:08 crc kubenswrapper[5108]: E1212 14:11:08.536714 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d28e560532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,LastTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:09 crc kubenswrapper[5108]: I1212 14:11:09.340350 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:09 crc kubenswrapper[5108]: I1212 14:11:09.530644 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 14:11:09 crc kubenswrapper[5108]: I1212 14:11:09.532426 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:09 crc kubenswrapper[5108]: I1212 14:11:09.533099 5108 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:09 crc kubenswrapper[5108]: I1212 14:11:09.533145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:09 crc kubenswrapper[5108]: I1212 14:11:09.533164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:09 crc kubenswrapper[5108]: E1212 14:11:09.533582 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:09 crc kubenswrapper[5108]: I1212 14:11:09.533868 5108 scope.go:117] "RemoveContainer" containerID="d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47" Dec 12 14:11:09 crc kubenswrapper[5108]: E1212 14:11:09.534110 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:09 crc kubenswrapper[5108]: E1212 14:11:09.539208 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d28e560532e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d28e560532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting 
failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,LastTimestamp:2025-12-12 14:11:09.534053459 +0000 UTC m=+22.442044618,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:09 crc kubenswrapper[5108]: E1212 14:11:09.965809 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 14:11:10 crc kubenswrapper[5108]: I1212 14:11:10.185062 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:10 crc kubenswrapper[5108]: I1212 14:11:10.186028 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:10 crc kubenswrapper[5108]: I1212 14:11:10.186072 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:10 crc kubenswrapper[5108]: I1212 14:11:10.186095 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:10 crc kubenswrapper[5108]: I1212 14:11:10.186121 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:11:10 crc kubenswrapper[5108]: E1212 14:11:10.194911 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 14:11:10 crc kubenswrapper[5108]: I1212 14:11:10.341056 5108 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:11 crc kubenswrapper[5108]: I1212 14:11:11.342269 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:12 crc kubenswrapper[5108]: E1212 14:11:12.243282 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.339425 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:12 crc kubenswrapper[5108]: E1212 14:11:12.344275 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.410461 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.410690 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:12 crc 
kubenswrapper[5108]: I1212 14:11:12.411478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.411520 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.411535 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:12 crc kubenswrapper[5108]: E1212 14:11:12.411861 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.416453 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.539171 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.539712 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.539751 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:12 crc kubenswrapper[5108]: I1212 14:11:12.539761 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:12 crc kubenswrapper[5108]: E1212 14:11:12.540104 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:12 crc kubenswrapper[5108]: E1212 14:11:12.896622 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list 
resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 14:11:13 crc kubenswrapper[5108]: I1212 14:11:13.343850 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:14 crc kubenswrapper[5108]: I1212 14:11:14.343703 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:15 crc kubenswrapper[5108]: I1212 14:11:15.342174 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:16 crc kubenswrapper[5108]: I1212 14:11:16.340656 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:16 crc kubenswrapper[5108]: E1212 14:11:16.971804 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.195224 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.196354 5108 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.196435 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.196456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.196481 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.207871 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.340501 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.340704 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.340748 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.341443 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.341469 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.341478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.341751 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.341971 5108 scope.go:117] "RemoveContainer" containerID="d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47" Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.342148 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.347678 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d28e560532e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d28e560532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,LastTimestamp:2025-12-12 14:11:17.342129248 +0000 UTC m=+30.250120397,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.463933 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.520409 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.550451 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.551528 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.551629 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.551659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.552478 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:17 crc kubenswrapper[5108]: I1212 14:11:17.552999 5108 scope.go:117] "RemoveContainer" containerID="d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47" Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.553439 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:11:17 crc kubenswrapper[5108]: E1212 14:11:17.563041 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d28e560532e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d28e560532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,LastTimestamp:2025-12-12 14:11:17.553367926 +0000 UTC m=+30.461359125,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:18 crc kubenswrapper[5108]: I1212 14:11:18.341965 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:18 crc kubenswrapper[5108]: E1212 14:11:18.904634 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 14:11:19 crc kubenswrapper[5108]: I1212 14:11:19.340774 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:20 crc kubenswrapper[5108]: I1212 14:11:20.340819 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:21 crc kubenswrapper[5108]: I1212 14:11:21.342948 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:22 crc kubenswrapper[5108]: I1212 14:11:22.341882 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:23 crc kubenswrapper[5108]: I1212 14:11:23.340720 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:23 crc kubenswrapper[5108]: E1212 14:11:23.973453 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:24 crc kubenswrapper[5108]: I1212 14:11:24.208130 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:24 crc kubenswrapper[5108]: I1212 14:11:24.209975 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:24 crc kubenswrapper[5108]: I1212 14:11:24.210016 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:24 crc kubenswrapper[5108]: I1212 14:11:24.210027 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:24 crc kubenswrapper[5108]: I1212 14:11:24.210050 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:24 crc kubenswrapper[5108]: E1212 14:11:24.223500 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:24 crc kubenswrapper[5108]: I1212 14:11:24.343344 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:25 crc kubenswrapper[5108]: I1212 14:11:25.341322 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:26 crc kubenswrapper[5108]: I1212 14:11:26.340292 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:27 crc kubenswrapper[5108]: I1212 14:11:27.341716 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:27 crc kubenswrapper[5108]: E1212 14:11:27.464723 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:11:28 crc kubenswrapper[5108]: I1212 14:11:28.337113 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:28 crc kubenswrapper[5108]: E1212 14:11:28.529141 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 14:11:29 crc kubenswrapper[5108]: I1212 14:11:29.344824 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:29 crc kubenswrapper[5108]: I1212 14:11:29.407126 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:29 crc kubenswrapper[5108]: I1212 14:11:29.408437 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:29 crc kubenswrapper[5108]: I1212 14:11:29.408672 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:29 crc kubenswrapper[5108]: I1212 14:11:29.408833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:29 crc kubenswrapper[5108]: E1212 14:11:29.409554 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:29 crc kubenswrapper[5108]: I1212 14:11:29.410129 5108 scope.go:117] "RemoveContainer" containerID="d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47"
Dec 12 14:11:29 crc kubenswrapper[5108]: E1212 14:11:29.416655 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d24a7e58bd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24a7e58bd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.318367699 +0000 UTC m=+3.226358858,LastTimestamp:2025-12-12 14:11:29.411461789 +0000 UTC m=+42.319452949,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:29 crc kubenswrapper[5108]: E1212 14:11:29.611999 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d24b48e664f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24b48e664f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.530760271 +0000 UTC m=+3.438751430,LastTimestamp:2025-12-12 14:11:29.605178008 +0000 UTC m=+42.513169187,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:29 crc kubenswrapper[5108]: E1212 14:11:29.624171 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d24b587e092\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d24b587e092 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:50.547110034 +0000 UTC m=+3.455101193,LastTimestamp:2025-12-12 14:11:29.619694758 +0000 UTC m=+42.527685917,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:30 crc kubenswrapper[5108]: I1212 14:11:30.342524 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:30 crc kubenswrapper[5108]: I1212 14:11:30.581299 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 14:11:30 crc kubenswrapper[5108]: I1212 14:11:30.583479 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932"}
Dec 12 14:11:30 crc kubenswrapper[5108]: I1212 14:11:30.583705 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:30 crc kubenswrapper[5108]: I1212 14:11:30.585337 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:30 crc kubenswrapper[5108]: I1212 14:11:30.585411 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:30 crc kubenswrapper[5108]: I1212 14:11:30.585425 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:30 crc kubenswrapper[5108]: E1212 14:11:30.586146 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:30 crc kubenswrapper[5108]: E1212 14:11:30.980198 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.224287 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.225266 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.225320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.225334 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.225361 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:31 crc kubenswrapper[5108]: E1212 14:11:31.238303 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.343419 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.586996 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.587789 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.589588 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932" exitCode=255
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.589651 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932"}
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.589694 5108 scope.go:117] "RemoveContainer" containerID="d34cfd58fd741825b24e70e447e91b8210e7201e6a9cff4b19a34fc5e0787e47"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.590000 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.590765 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.590797 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.590808 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:31 crc kubenswrapper[5108]: E1212 14:11:31.591278 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:31 crc kubenswrapper[5108]: I1212 14:11:31.591519 5108 scope.go:117] "RemoveContainer" containerID="e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932"
Dec 12 14:11:31 crc kubenswrapper[5108]: E1212 14:11:31.591756 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:11:31 crc kubenswrapper[5108]: E1212 14:11:31.600643 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d28e560532e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d28e560532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,LastTimestamp:2025-12-12 14:11:31.591731198 +0000 UTC m=+44.499722367,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:32 crc kubenswrapper[5108]: E1212 14:11:32.030919 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 14:11:32 crc kubenswrapper[5108]: I1212 14:11:32.344921 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:32 crc kubenswrapper[5108]: I1212 14:11:32.595347 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 14:11:33 crc kubenswrapper[5108]: I1212 14:11:33.340529 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:34 crc kubenswrapper[5108]: I1212 14:11:34.135692 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:11:34 crc kubenswrapper[5108]: I1212 14:11:34.135875 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:34 crc kubenswrapper[5108]: I1212 14:11:34.136509 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:34 crc kubenswrapper[5108]: I1212 14:11:34.136538 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:34 crc kubenswrapper[5108]: I1212 14:11:34.136550 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:34 crc kubenswrapper[5108]: E1212 14:11:34.136825 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:34 crc kubenswrapper[5108]: I1212 14:11:34.343332 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:35 crc kubenswrapper[5108]: I1212 14:11:35.344063 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:36 crc kubenswrapper[5108]: I1212 14:11:36.343800 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:37 crc kubenswrapper[5108]: I1212 14:11:37.340315 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:11:37 crc kubenswrapper[5108]: I1212 14:11:37.340420 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:37 crc kubenswrapper[5108]: I1212 14:11:37.340890 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:37 crc kubenswrapper[5108]: I1212 14:11:37.341912 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:37 crc kubenswrapper[5108]: I1212 14:11:37.341953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:37 crc kubenswrapper[5108]: I1212 14:11:37.341963 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:37 crc kubenswrapper[5108]: E1212 14:11:37.342311 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:37 crc kubenswrapper[5108]: I1212 14:11:37.342564 5108 scope.go:117] "RemoveContainer" containerID="e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932"
Dec 12 14:11:37 crc kubenswrapper[5108]: E1212 14:11:37.342745 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:11:37 crc kubenswrapper[5108]: E1212 14:11:37.349994 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d28e560532e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d28e560532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,LastTimestamp:2025-12-12 14:11:37.342717376 +0000 UTC m=+50.250708535,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:37 crc kubenswrapper[5108]: E1212 14:11:37.465549 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:11:37 crc kubenswrapper[5108]: E1212 14:11:37.805062 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:11:37 crc kubenswrapper[5108]: E1212 14:11:37.986269 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:38 crc kubenswrapper[5108]: I1212 14:11:38.238742 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:38 crc kubenswrapper[5108]: I1212 14:11:38.240122 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:38 crc kubenswrapper[5108]: I1212 14:11:38.240162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:38 crc kubenswrapper[5108]: I1212 14:11:38.240173 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:38 crc kubenswrapper[5108]: I1212 14:11:38.240198 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:38 crc kubenswrapper[5108]: E1212 14:11:38.257197 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:38 crc kubenswrapper[5108]: I1212 14:11:38.343407 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:39 crc kubenswrapper[5108]: I1212 14:11:39.339718 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:40 crc kubenswrapper[5108]: I1212 14:11:40.341727 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:40 crc kubenswrapper[5108]: I1212 14:11:40.584753 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:11:40 crc kubenswrapper[5108]: I1212 14:11:40.585238 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:40 crc kubenswrapper[5108]: I1212 14:11:40.586337 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:40 crc kubenswrapper[5108]: I1212 14:11:40.586523 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:40 crc kubenswrapper[5108]: I1212 14:11:40.586624 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:40 crc kubenswrapper[5108]: E1212 14:11:40.587132 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:40 crc kubenswrapper[5108]: I1212 14:11:40.587462 5108 scope.go:117] "RemoveContainer" containerID="e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932"
Dec 12 14:11:40 crc kubenswrapper[5108]: E1212 14:11:40.587729 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:11:40 crc kubenswrapper[5108]: E1212 14:11:40.596190 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d28e560532e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d28e560532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:11:08.529693486 +0000 UTC m=+21.437684645,LastTimestamp:2025-12-12 14:11:40.587693984 +0000 UTC m=+53.495685143,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:41 crc kubenswrapper[5108]: I1212 14:11:41.342067 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:41 crc kubenswrapper[5108]: E1212 14:11:41.556601 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 14:11:42 crc kubenswrapper[5108]: I1212 14:11:42.343182 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:43 crc kubenswrapper[5108]: I1212 14:11:43.343673 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:44 crc kubenswrapper[5108]: I1212 14:11:44.340831 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:44 crc kubenswrapper[5108]: E1212 14:11:44.992497 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:45 crc kubenswrapper[5108]: I1212 14:11:45.258621 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:45 crc kubenswrapper[5108]: I1212 14:11:45.260038 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:45 crc kubenswrapper[5108]: I1212 14:11:45.260124 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:45 crc kubenswrapper[5108]: I1212 14:11:45.260140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:45 crc kubenswrapper[5108]: I1212 14:11:45.260170 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:45 crc kubenswrapper[5108]: E1212 14:11:45.270119 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:45 crc kubenswrapper[5108]: I1212 14:11:45.340896 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:46 crc kubenswrapper[5108]: I1212 14:11:46.343588 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:47 crc kubenswrapper[5108]: I1212 14:11:47.341816 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:47 crc kubenswrapper[5108]: E1212 14:11:47.467228 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:11:48 crc kubenswrapper[5108]: I1212 14:11:48.340841 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:49 crc kubenswrapper[5108]: I1212 14:11:49.341577 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:50 crc kubenswrapper[5108]: I1212 14:11:50.343567 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:51 crc kubenswrapper[5108]: I1212 14:11:51.340581 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:51 crc kubenswrapper[5108]: E1212 14:11:51.997951 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.270830 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.271874 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.271916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.271928 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.271955 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:52 crc kubenswrapper[5108]: E1212 14:11:52.282503 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.340521 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.389450 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jqwh8"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.395272 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jqwh8"
Dec 12 14:11:52 crc kubenswrapper[5108]: I1212 14:11:52.456782 5108 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.262737 5108 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.397375 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-11 14:06:52 +0000 UTC" deadline="2026-01-04 09:44:09.190048779 +0000 UTC"
Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.397444 5108
certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="547h32m15.792609618s" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.406893 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.407716 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.407760 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.407773 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:53 crc kubenswrapper[5108]: E1212 14:11:53.408238 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.408533 5108 scope.go:117] "RemoveContainer" containerID="e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.651345 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.652668 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9"} Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.652991 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 
14:11:53.654048 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.654179 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:53 crc kubenswrapper[5108]: I1212 14:11:53.654288 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:53 crc kubenswrapper[5108]: E1212 14:11:53.654824 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.658042 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.658652 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.659868 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9" exitCode=255 Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.659900 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9"} Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.659930 5108 scope.go:117] "RemoveContainer" containerID="e5d0d1c1fc61ce6a52b3dbffd2318eb8d5d244ba24d928bc077a48f797972932" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.660211 5108 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.660912 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.660936 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.660944 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5108]: E1212 14:11:55.661309 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:55 crc kubenswrapper[5108]: I1212 14:11:55.661521 5108 scope.go:117] "RemoveContainer" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9" Dec 12 14:11:55 crc kubenswrapper[5108]: E1212 14:11:55.661720 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:56 crc kubenswrapper[5108]: I1212 14:11:56.664250 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 14:11:57 crc kubenswrapper[5108]: I1212 14:11:57.340740 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:57 crc kubenswrapper[5108]: I1212 14:11:57.341001 5108 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:57 crc kubenswrapper[5108]: I1212 14:11:57.341798 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5108]: I1212 14:11:57.341837 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5108]: I1212 14:11:57.341849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5108]: E1212 14:11:57.342270 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:57 crc kubenswrapper[5108]: I1212 14:11:57.342527 5108 scope.go:117] "RemoveContainer" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9" Dec 12 14:11:57 crc kubenswrapper[5108]: E1212 14:11:57.342751 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:57 crc kubenswrapper[5108]: E1212 14:11:57.468413 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.282873 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.284272 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc 
kubenswrapper[5108]: I1212 14:11:59.284417 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.284432 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.284660 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.293297 5108 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.293623 5108 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.293651 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.296844 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.296876 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.296889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.296905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.296917 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.312249 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.319625 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.319682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.319696 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.319713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.319728 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.331743 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.339525 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.339557 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.339566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.339580 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.339590 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.347851 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.353504 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.353632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.353708 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.353774 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5108]: I1212 14:11:59.353838 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.361317 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.361746 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.361841 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.462844 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.563306 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.664109 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.764797 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.865495 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:11:59 crc kubenswrapper[5108]: E1212 14:11:59.965873 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.066925 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.168048 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.269339 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.370040 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.471616 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.572145 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.672692 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.774037 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.875095 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:00 crc kubenswrapper[5108]: E1212 14:12:00.975963 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.077111 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.177321 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.278332 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.378829 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.479949 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.580068 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.681061 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.782138 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.882733 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:01 crc kubenswrapper[5108]: E1212 14:12:01.983392 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.084097 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.184460 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.285239 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.385475 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.486123 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.586306 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.686771 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.787580 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.888457 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:02 crc kubenswrapper[5108]: E1212 14:12:02.989489 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.090232 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.190390 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.291383 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.392577 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.493269 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.594013 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: I1212 14:12:03.654261 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:12:03 crc kubenswrapper[5108]: I1212 14:12:03.654657 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:12:03 crc kubenswrapper[5108]: I1212 14:12:03.655659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:03 crc kubenswrapper[5108]: I1212 14:12:03.655836 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:03 crc kubenswrapper[5108]: I1212 14:12:03.655894 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.656560 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:12:03 crc kubenswrapper[5108]: I1212 14:12:03.656872 5108 scope.go:117] "RemoveContainer" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.657229 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.694409 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.794753 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.895290 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:03 crc kubenswrapper[5108]: E1212 14:12:03.995790 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.096599 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.197236 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.297436 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.398219 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.498431 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.598796 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.699366 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.799886 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:04 crc kubenswrapper[5108]: E1212 14:12:04.900892 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.001957 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.102323 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.202973 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.303630 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.403808 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: I1212 14:12:05.407410 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:12:05 crc kubenswrapper[5108]: I1212 14:12:05.408504 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:05 crc kubenswrapper[5108]: I1212 14:12:05.408552 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:05 crc kubenswrapper[5108]: I1212 14:12:05.408570 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.409237 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.503946 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.604357 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.705040 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.806216 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:05 crc kubenswrapper[5108]: E1212 14:12:05.907357 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.007543 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.108191 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.208927 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.309430 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.410170 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.510617 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.611057 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.712407 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.812583 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:06 crc kubenswrapper[5108]: E1212 14:12:06.912985 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.013294 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.113727 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.214238 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.314921 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.415223 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.469361 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.516462 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.617006 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.717590 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.818881 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:07 crc kubenswrapper[5108]: E1212 14:12:07.920318 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.021728 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.122429 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.222983 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.323553 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.423692 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.524138 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: I1212 14:12:08.621913 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.624274 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.724737 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.825400 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:08 crc kubenswrapper[5108]: E1212 14:12:08.925778 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.025935 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.126629 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.227446 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.328037 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.429043 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.452183 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.456683 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.456767 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.456794 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.456824 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.456846 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:09Z","lastTransitionTime":"2025-12-12T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.474467 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.478701 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.478857 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.478881 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.478939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.478961 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:09Z","lastTransitionTime":"2025-12-12T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.491154 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.494535 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.494584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.494601 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.494620 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.494635 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:09Z","lastTransitionTime":"2025-12-12T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.505652 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.509271 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.509338 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.509355 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.509373 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:09 crc kubenswrapper[5108]: I1212 14:12:09.509385 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:09Z","lastTransitionTime":"2025-12-12T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.521974 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.522210 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.529892 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.631050 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.731666 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.832343 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:09 crc kubenswrapper[5108]: E1212 14:12:09.933018 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.034157 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.135264 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.235430 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.336238 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.436913 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.537418 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.638465 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.739209 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.839870 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:10 crc kubenswrapper[5108]: E1212 14:12:10.941016 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.041770 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.142971 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.243846 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.344566 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.445588 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.546147 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.646831 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.746999 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.848186 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:11 crc kubenswrapper[5108]: E1212 14:12:11.948392 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.049487 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.149727 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.250862 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.351918 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.452597 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.553053 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.654354 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.755172 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.856005 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:12 crc kubenswrapper[5108]: E1212 14:12:12.957159 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.057888 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.158284 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.258816 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.359491 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.460276 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.561121 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.661998 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.762239 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.862704 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:13 crc kubenswrapper[5108]: E1212 14:12:13.963879 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.064837 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.165466 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.265697 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.366601 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.466777 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.567424 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.668028 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.768316 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.868863 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:14 crc kubenswrapper[5108]: E1212 14:12:14.969033 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.069866 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.170970 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: I1212 14:12:15.267175 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.271459 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.372287 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.473150 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.573892 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.674473 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.774869 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.875510 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:15 crc kubenswrapper[5108]: E1212 14:12:15.975853 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.076025 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.177185 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.277320 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.377697 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.478498 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.579678 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.680586 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.781359 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.881472 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:16 crc kubenswrapper[5108]: E1212 14:12:16.982277 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.082402 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.182945 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.283568 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.384073 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: I1212 14:12:17.406997 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:12:17 crc kubenswrapper[5108]: I1212 14:12:17.407941 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:17 crc kubenswrapper[5108]: I1212 14:12:17.407988 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:17 crc kubenswrapper[5108]: I1212 14:12:17.408002 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.408452 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:12:17 crc kubenswrapper[5108]: I1212 14:12:17.408676 5108 scope.go:117] "RemoveContainer" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.408891 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.470262 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.484490 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.585278 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.685682 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.786061 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.886644 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:17 crc kubenswrapper[5108]: E1212 14:12:17.987154 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.087759 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.188891 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.289059 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.390169 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.490573 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.591621 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.692316 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.792454 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.893140 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:18 crc kubenswrapper[5108]: E1212 14:12:18.993638 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.094899 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.196158 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.297019 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.365340 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.381588 5108 apiserver.go:52] "Watching apiserver"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.385949 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.386701 5108 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-69wzc","openshift-image-registry/node-ca-9l7sp","openshift-multus/network-metrics-daemon-p4g92","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4","openshift-dns/node-resolver-tgfnd","openshift-machine-config-operator/machine-config-daemon-w294k","openshift-multus/multus-additional-cni-plugins-ctxlm","openshift-multus/multus-ztpws"]
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.388064 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.389821 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.389835 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.390323 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.390388 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.390414 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.391915 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.391971 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.394720 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.400270 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.400293 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.400387 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.400449 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.400565 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.400692 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.401013 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.401458 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.401488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.401508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.401538 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.401563 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.411616 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.422157 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.433518 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.444108 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.448652 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.455758 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.456499 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.465816 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.503372 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.503414 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.503423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.503437 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.503447 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527010 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llhc9\" (UniqueName: \"kubernetes.io/projected/23a662b0-e060-413b-a12c-6ef886171e85-kube-api-access-llhc9\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527325 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527367 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527404 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527452 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/23a662b0-e060-413b-a12c-6ef886171e85-serviceca\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527489 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.527502 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527545 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.527622 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.027569419 +0000 UTC m=+92.935560568 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527652 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527693 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527731 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 
14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527751 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.527955 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.527728 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.528052 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.028041412 +0000 UTC m=+92.936032571 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.527858 5108 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: object "openshift-network-operator"/"iptables-alerter-script" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.528205 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script podName:428b39f5-eb1c-4f65-b7a4-eeb6e84860cc nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.028189675 +0000 UTC m=+92.936180924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script") pod "iptables-alerter-5jnd7" (UID: "428b39f5-eb1c-4f65-b7a4-eeb6e84860cc") : object "openshift-network-operator"/"iptables-alerter-script" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.528294 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23a662b0-e060-413b-a12c-6ef886171e85-host\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.528357 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: 
\"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.528411 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.528672 5108 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.528901 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.534888 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.536949 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc 
kubenswrapper[5108]: E1212 14:12:19.543229 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.543264 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.543280 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.543378 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.043355373 +0000 UTC m=+92.951346532 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.544387 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.544432 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.544458 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.544533 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.044510494 +0000 UTC m=+92.952501753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.550787 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.550982 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.558355 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.566466 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.567897 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.571095 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.572193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.578285 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.592133 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.595841 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.596265 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.596404 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.596593 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.596708 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.610037 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.610661 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.611069 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.611253 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.611317 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.611393 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.611479 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.615326 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.615808 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.615849 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.615949 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.618299 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.615977 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.619006 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.620840 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.621070 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.633683 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/23a662b0-e060-413b-a12c-6ef886171e85-serviceca\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.633797 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.633830 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23a662b0-e060-413b-a12c-6ef886171e85-host\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.633871 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.633892 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llhc9\" (UniqueName: \"kubernetes.io/projected/23a662b0-e060-413b-a12c-6ef886171e85-kube-api-access-llhc9\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.634392 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.634699 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23a662b0-e060-413b-a12c-6ef886171e85-host\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.634813 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.635551 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/23a662b0-e060-413b-a12c-6ef886171e85-serviceca\") 
pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.638361 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.638420 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.638517 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.638759 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tgfnd"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.641747 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.642071 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.642112 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w294k"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.642623 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.643110 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.643218 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.651205 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.651528 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.651764 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.651986 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.652095 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.652956 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ctxlm"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.653369 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.654651 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.654751 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.654816 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.655904 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.656893 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.659221 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.657046 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.657345 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.657432 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.657527 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.658165 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.657001 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.665049 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llhc9\" (UniqueName: 
\"kubernetes.io/projected/23a662b0-e060-413b-a12c-6ef886171e85-kube-api-access-llhc9\") pod \"node-ca-9l7sp\" (UID: \"23a662b0-e060-413b-a12c-6ef886171e85\") " pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.665526 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.666239 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.665629 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.668313 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.668647 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.672194 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.672251 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.672265 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.672286 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.672299 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.676836 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.676889 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.676900 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.677362 5108 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.677518 5108 scope.go:117] "RemoveContainer" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.677694 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.680592 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.686461 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.691156 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.691232 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.691246 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.691266 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.691278 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.698531 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b12c1ac8-9dfc-44a6-ab15-039474971984\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://932ed3c5bcf1854189f1d3805b64414035dc03af3e619ca1b55677669ac65b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-12-12T14:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd5ec9690ba6bd343756405ba75cd1a1c43371e5ae4089055a25b13bd13192fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ec9690ba6bd343756405ba75cd1a1c43371e5ae4089055a25b13bd13192fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.702529 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.705566 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.708592 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.708677 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.708691 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.708732 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.708747 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.709417 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8603a7b-d127-481c-8901-fff3b6f9f38b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tz82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tz82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-wzxz4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.712410 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.719730 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b9ecfb5f-c09b-4d5a-8f60-9bac4baf1e5d\\\",\\\"systemUUID\\\":\\\"fba542e1-ce5a-4556-a3dc-e51e5c5391bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.719856 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.720973 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.721000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.721011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.721025 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.721056 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.721730 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ztpws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvkpl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztpws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: W1212 14:12:19.726332 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-8b35af4f8a5d3d7fdfd7b70fb554ab4e3b95b74f1c1866243d06ecb4dc8470c9 WatchSource:0}: Error finding container 8b35af4f8a5d3d7fdfd7b70fb554ab4e3b95b74f1c1866243d06ecb4dc8470c9: Status 404 returned error can't find the container with id 8b35af4f8a5d3d7fdfd7b70fb554ab4e3b95b74f1c1866243d06ecb4dc8470c9 Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.727614 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"052403dbdd107a0bae77d724bf4d20ea4361f984a02f830d100f772a1063265a"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734095 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc 
kubenswrapper[5108]: I1212 14:12:19.734249 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-netd\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734405 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734603 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-node-log\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734744 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-var-lib-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734774 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-etc-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734801 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-netns\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734838 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734863 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-ovn\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734880 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl29g\" (UniqueName: \"kubernetes.io/projected/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-kube-api-access-vl29g\") pod \"node-resolver-tgfnd\" (UID: 
\"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734920 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-kubelet\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734938 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-systemd\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734956 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-hosts-file\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734975 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965xl\" (UniqueName: \"kubernetes.io/projected/fcb30c12-8b29-461d-ab3e-a76577b664d6-kube-api-access-965xl\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.734993 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-system-cni-dir\") pod 
\"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735015 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735033 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-log-socket\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735235 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-env-overrides\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735293 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcb30c12-8b29-461d-ab3e-a76577b664d6-proxy-tls\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735342 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-systemd-units\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-ovn-kubernetes\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735384 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovn-node-metrics-cert\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc 
kubenswrapper[5108]: I1212 14:12:19.735404 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735429 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735520 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndw6\" (UniqueName: \"kubernetes.io/projected/65a61526-11c5-4c70-ae85-c126f893efd8-kube-api-access-2ndw6\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735641 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-tmp-dir\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735677 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735708 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnpq\" (UniqueName: \"kubernetes.io/projected/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-kube-api-access-gmnpq\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-config\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735808 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw6q8\" (UniqueName: \"kubernetes.io/projected/934d8f16-46da-4779-8ab8-31b05d1e8b5c-kube-api-access-qw6q8\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735836 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcb30c12-8b29-461d-ab3e-a76577b664d6-rootfs\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735860 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcb30c12-8b29-461d-ab3e-a76577b664d6-mcd-auth-proxy-config\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735886 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-slash\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735926 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-bin\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735951 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-script-lib\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.735984 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tz82\" (UniqueName: \"kubernetes.io/projected/d8603a7b-d127-481c-8901-fff3b6f9f38b-kube-api-access-7tz82\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 
14:12:19.736016 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-cnibin\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.736036 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-os-release\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.742384 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50855968-230d-4603-987e-55b9024123ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://75e4059b371028e5ac87de212415
3ec0f015515b7d7cccb6fe820baf9f24e855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://094d61ef533aaf36ef1292d5d3cf93d98b904768d2dd2302c105813cf086a12a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f81d65dd2849c53db5ca57925ec1c2aa5bab16f99922d614bd44602e714189dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a6bc7349df8b4362b693300a658ae0d6e3d61236dad5a9bbb2a14f0954d2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\
\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0ccda5f0b75af7937025a7de6c6e514dfd9fa150a51bc9b030ed54f2fadb4e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"5
0Mi\\\"},\\\"containerID\\\":\\\"cri-o://65794f3f33dea6a6573ec10459ed3d71dddf5f88abe63c08ea870e714bd3f860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65794f3f33dea6a6573ec10459ed3d71dddf5f88abe63c08ea870e714bd3f860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0d5aa17a706a840cf2378c515099d25962d06e875982e7f63fb86384306330aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d5aa17a706a840cf2378c515099d25962d06e875982e7f63fb86384306330aa\\\",\\\"exitCode\\\":
0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://20b42b583304f935da956f0e33a5b2c9f6cec4f570c43a0daab976b1cf6c01da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20b42b583304f935da956f0e33a5b2c9f6cec4f570c43a0daab976b1cf6c01da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.750679 5108 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.753980 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.756252 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.756773 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.764217 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.781397 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934d8f16-46da-4779-8ab8-31b05d1e8b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw6q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.791330 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb30c12-8b29-461d-ab3e-a76577b664d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-965xl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-965xl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w294k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.802399 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65a61526-11c5-4c70-ae85-c126f893efd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ndw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ndw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ndw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ndw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ndw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ndw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ndw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ctxlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.812335 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c83071b0-146c-4768-adbb-21a30f71994e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efef1b5c82
7deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T14:11:54Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1212 14:11:54.208614 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 14:11:54.208850 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 14:11:54.209664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-16874539/tls.crt::/tmp/serving-cert-16874539/tls.key\\\\\\\"\\\\nI1212 14:11:54.758712 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 14:11:54.760779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 14:11:54.760795 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 14:11:54.760814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 14:11:54.760818 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 14:11:54.764262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 14:11:54.764286 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 14:11:54.764291 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 14:11:54.764296 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 14:11:54.764300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 14:11:54.764303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 14:11:54.764306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 14:11:54.764492 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 14:11:54.766009 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T14:11:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.822668 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.822772 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.822832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.822893 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.822944 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.827878 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836472 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836506 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836525 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836545 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836561 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836579 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836600 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836616 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836633 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836651 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: 
\"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836746 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.836763 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.837393 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.838549 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839105 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). 
InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839153 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839263 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839312 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839371 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839432 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 
14:12:19.839486 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839533 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839583 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839561 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839640 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839680 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839728 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839773 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839816 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839923 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.839974 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840023 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" 
(UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840070 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840152 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840152 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840264 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840191 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840387 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840421 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840447 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840474 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840503 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" 
(UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840547 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840572 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840594 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840612 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840634 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 14:12:19 crc 
kubenswrapper[5108]: I1212 14:12:19.840655 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840675 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840698 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840721 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840742 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840763 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: 
\"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840784 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840738 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840809 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840860 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840918 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod 
\"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840952 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.840978 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841009 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841039 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841069 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841072 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841157 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841187 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841218 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841254 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841281 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841311 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841438 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841466 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841495 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841528 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841780 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841846 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841881 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841907 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841939 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841966 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841990 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842028 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842175 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842241 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842269 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842292 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842332 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843040 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843141 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.841843 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842160 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.842672 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843138 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843419 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843602 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843753 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843917 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843868 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.843981 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.844060 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.844326 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.844814 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.844831 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.844841 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.844880 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845009 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845101 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845190 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845237 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845257 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845480 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845497 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845527 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845567 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845591 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845606 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845647 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.845938 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846026 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846070 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846071 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846164 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846187 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846216 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846355 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846513 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846534 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846631 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846680 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846714 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846821 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846872 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.846904 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847049 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847070 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847338 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847427 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847448 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847557 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847585 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.847458 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.848201 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.848243 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.848399 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.848506 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.848705 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.848909 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.848972 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849005 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849030 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849058 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849108 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849136 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849154 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849171 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849152 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849268 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849416 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849671 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849839 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849904 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849913 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849963 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.849996 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850124 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850146 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850170 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850216 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850238 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850276 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850310 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850343 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850370 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850404 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850431 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850439 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850536 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850605 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850627 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850649 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850695 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850717 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850738 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850757 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850792 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850833 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850851 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850871 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850923 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850943 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.850989 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851015 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851036 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851054 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851101 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851122 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851140 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851160 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851180 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851199 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851218 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851239 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851260 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851283 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851311 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.851941 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852045 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852069 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852106 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852225 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852258 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852291 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName:
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852324 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852353 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852374 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852407 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 
14:12:19.852469 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852551 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852581 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852611 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852633 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852683 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod 
\"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852704 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852724 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852744 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852771 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852793 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852814 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852836 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852855 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852882 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852910 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852935 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") 
" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852956 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.852980 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853002 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853022 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853100 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853124 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod 
\"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853145 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853165 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853187 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853228 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853249 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853272 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853303 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853332 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853418 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853441 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853466 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853497 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853516 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853578 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853602 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 
14:12:19.853624 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853670 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853694 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853725 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853755 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853933 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-var-lib-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853959 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-etc-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.853990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-netns\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854012 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-ovn\") pod \"ovnkube-node-69wzc\" (UID: 
\"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854535 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl29g\" (UniqueName: \"kubernetes.io/projected/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-kube-api-access-vl29g\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854571 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-k8s-cni-cncf-io\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854616 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-kubelet\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854771 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-systemd\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854798 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-hosts-file\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " 
pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-965xl\" (UniqueName: \"kubernetes.io/projected/fcb30c12-8b29-461d-ab3e-a76577b664d6-kube-api-access-965xl\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.854944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-system-cni-dir\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856106 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-log-socket\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: 
\"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856193 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856355 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-os-release\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856412 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-hostroot\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-env-overrides\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcb30c12-8b29-461d-ab3e-a76577b664d6-proxy-tls\") pod \"machine-config-daemon-w294k\" (UID: 
\"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856495 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-system-cni-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856518 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-cni-multus\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856537 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-etc-kubernetes\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856610 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-systemd-units\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856684 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-socket-dir-parent\") pod \"multus-ztpws\" (UID: 
\"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.856716 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-ovn-kubernetes\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.858798 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovn-node-metrics-cert\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.858855 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.858957 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.858997 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndw6\" (UniqueName: 
\"kubernetes.io/projected/65a61526-11c5-4c70-ae85-c126f893efd8-kube-api-access-2ndw6\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859041 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-cnibin\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859073 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-daemon-config\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-tmp-dir\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859178 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859204 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-netns\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859246 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnpq\" (UniqueName: \"kubernetes.io/projected/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-kube-api-access-gmnpq\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859273 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-config\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qw6q8\" (UniqueName: \"kubernetes.io/projected/934d8f16-46da-4779-8ab8-31b05d1e8b5c-kube-api-access-qw6q8\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859331 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcb30c12-8b29-461d-ab3e-a76577b664d6-rootfs\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859357 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fcb30c12-8b29-461d-ab3e-a76577b664d6-mcd-auth-proxy-config\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859362 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859387 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-slash\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859437 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-slash\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859499 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-bin\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859579 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-script-lib\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859640 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859656 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7tz82\" (UniqueName: \"kubernetes.io/projected/d8603a7b-d127-481c-8901-fff3b6f9f38b-kube-api-access-7tz82\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859723 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-cnibin\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-os-release\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859814 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-cni-binary-copy\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " 
pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859858 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.859855 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-multus-certs\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.860272 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.860615 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.860957 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.861120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-systemd\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.861187 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-hosts-file\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.861408 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-system-cni-dir\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: 
\"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.861920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-config\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.862137 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.862213 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-log-socket\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.862660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-tmp-dir\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.864115 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65a61526-11c5-4c70-ae85-c126f893efd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " 
pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.864215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.864276 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.864316 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-netd\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.864837 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.865033 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-env-overrides\") pod \"ovnkube-node-69wzc\" (UID: 
\"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.865043 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l7sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a662b0-e060-413b-a12c-6ef886171e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llhc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l7sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.865121 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-ovn-kubernetes\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.865183 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-bin\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.865248 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcb30c12-8b29-461d-ab3e-a76577b664d6-rootfs\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.865385 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.865392 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.866226 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-etc-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.866339 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.866433 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.866806 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-var-lib-openvswitch\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867019 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867093 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867674 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867759 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-cnibin\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867796 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-systemd-units\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867823 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcb30c12-8b29-461d-ab3e-a76577b664d6-mcd-auth-proxy-config\") pod \"machine-config-daemon-w294k\" (UID: 
\"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867849 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.867856 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65a61526-11c5-4c70-ae85-c126f893efd8-os-release\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.868164 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.868367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.868854 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.868923 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869194 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-script-lib\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869243 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-kubelet\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869504 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcb30c12-8b29-461d-ab3e-a76577b664d6-proxy-tls\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " 
pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869598 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-netd\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869610 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869736 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869769 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869816 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-cni-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869824 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869890 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvkpl\" (UniqueName: \"kubernetes.io/projected/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-kube-api-access-qvkpl\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869936 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-node-log\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.869965 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-cni-bin\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.870022 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-kubelet\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.870103 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-conf-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" 
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.870133 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.870257 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.870035 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.870475 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.870848 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-ovn\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.871043 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.871312 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-node-log\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.871366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-netns\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc"
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.871801 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed.
No retries permitted until 2025-12-12 14:12:20.371751715 +0000 UTC m=+93.279742874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.871857 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.871951 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.871874 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872013 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872024 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872034 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872061 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872107 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872072 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872198 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872215 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872229 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872260 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872272 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872284 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872294 5108 reconciler_common.go:299] "Volume detached
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872312 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872342 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872354 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872365 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872375 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872384 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872414 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName:
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872426 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872428 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872437 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872502 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872518 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872529 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872540 5108
reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872550 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872561 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872570 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872580 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872590 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872573 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key".
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872601 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872640 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872654 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872684 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872695 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872705 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872715 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872726 5108
reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872736 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872764 5108 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872775 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872788 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872799 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872808 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872819 5108 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName:
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872846 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872856 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872867 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872877 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872887 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.872938 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.872989 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25"
(UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: E1212 14:12:19.873576 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs podName:d8c95a75-0c3b-4caa-9b09-30c6dca73e72 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.372984259 +0000 UTC m=+93.280975418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs") pod "network-metrics-daemon-p4g92" (UID: "d8c95a75-0c3b-4caa-9b09-30c6dca73e72") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.876043 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p4g92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmnpq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmnpq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p4g92\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.878660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovn-node-metrics-cert\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.879019 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.879179 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.879606 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnpq\" (UniqueName: \"kubernetes.io/projected/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-kube-api-access-gmnpq\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.880732 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"kube-api-access-965xl\" (UniqueName: \"kubernetes.io/projected/fcb30c12-8b29-461d-ab3e-a76577b664d6-kube-api-access-965xl\") pod \"machine-config-daemon-w294k\" (UID: \"fcb30c12-8b29-461d-ab3e-a76577b664d6\") " pod="openshift-machine-config-operator/machine-config-daemon-w294k"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.883864 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw6q8\" (UniqueName: \"kubernetes.io/projected/934d8f16-46da-4779-8ab8-31b05d1e8b5c-kube-api-access-qw6q8\") pod \"ovnkube-node-69wzc\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wzc"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.884933 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ndw6\" (UniqueName: \"kubernetes.io/projected/65a61526-11c5-4c70-ae85-c126f893efd8-kube-api-access-2ndw6\") pod \"multus-additional-cni-plugins-ctxlm\" (UID: \"65a61526-11c5-4c70-ae85-c126f893efd8\") " pod="openshift-multus/multus-additional-cni-plugins-ctxlm"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.885241 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tz82\" (UniqueName: \"kubernetes.io/projected/d8603a7b-d127-481c-8901-fff3b6f9f38b-kube-api-access-7tz82\") pod \"ovnkube-control-plane-57b78d8988-wzxz4\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.885322 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tgfnd" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl29g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tgfnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.886385 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl29g\" (UniqueName: \"kubernetes.io/projected/93177b5a-a677-4e5b-ac8d-cf64e68bd1d9-kube-api-access-vl29g\") pod \"node-resolver-tgfnd\" (UID: \"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9\") " pod="openshift-dns/node-resolver-tgfnd"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.895024 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.905636 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:12:19Z\\\",\\\"message\\\":\\\"containers with unready status:
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.925572 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.925615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.925626 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.925641 5108 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.925655 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:19Z","lastTransitionTime":"2025-12-12T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.929592 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9l7sp" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.936323 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.936575 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.936608 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). 
InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.937772 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.938262 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.938598 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.939158 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.939235 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.939359 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.939398 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.939545 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.939275 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.939887 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.940102 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.940169 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.940404 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.940724 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.941651 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.941857 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.941886 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.941890 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.941884 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.941918 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.941934 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.942607 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.942954 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.943140 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.943685 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.943743 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.944011 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.944068 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.944498 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.944531 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.944763 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.945601 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.946053 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.945428 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.946490 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.946660 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.946784 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.946851 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.947211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.947332 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.947381 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.947529 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.947555 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.948449 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.949660 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.949784 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.951880 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950265 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950282 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950285 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950295 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950319 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950474 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950499 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950512 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950618 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.950636 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.951311 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.951808 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.952204 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.952254 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.952359 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954473 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954521 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954037 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954051 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954331 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954388 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954457 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.954883 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.955292 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.955688 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: W1212 14:12:19.958285 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod934d8f16_46da_4779_8ab8_31b05d1e8b5c.slice/crio-b992142de4239b8b21dcd0596986f91d86fd70204d0fbf8258995c96c7f0ca90 WatchSource:0}: Error finding container b992142de4239b8b21dcd0596986f91d86fd70204d0fbf8258995c96c7f0ca90: Status 404 returned error can't find the container with id b992142de4239b8b21dcd0596986f91d86fd70204d0fbf8258995c96c7f0ca90
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.960327 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.961184 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.962509 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.962524 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.962838 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.962839 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.962961 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.962991 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.963572 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.963571 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.964032 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.964304 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.964407 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.964790 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.965520 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.965546 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.965560 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.966001 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.966375 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.966559 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.966615 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.966938 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.966931 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.968188 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.968299 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.969211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.969995 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.970243 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.970461 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.970557 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.971140 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.971165 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.971167 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.971866 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973280 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973822 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-cni-binary-copy\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-multus-certs\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973889 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-cni-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973905 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvkpl\" (UniqueName: \"kubernetes.io/projected/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-kube-api-access-qvkpl\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973924 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-cni-bin\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973940 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-kubelet\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973956 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-conf-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.973983 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-k8s-cni-cncf-io\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-os-release\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974033 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-hostroot\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974052 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-system-cni-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974069 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-cni-multus\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974122 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-etc-kubernetes\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974172 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-socket-dir-parent\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974198 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-cnibin\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974200 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974222 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-daemon-config\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974263 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-netns\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974272 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-k8s-cni-cncf-io\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974356 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-etc-kubernetes\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974505 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974523 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-socket-dir-parent\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974575 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-cnibin\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974615 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-cni-multus\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974616 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-system-cni-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974358 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-conf-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974632 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974688 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-cni-dir\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974729 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-netns\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974740 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-os-release\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974778 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-hostroot\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974810 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-run-multus-certs\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.974839 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-cni-bin\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975175 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-host-var-lib-kubelet\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975210 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975225 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975237 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975249 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975260 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975270 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975282 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975291 5108 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975300 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975310 5108 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975231 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-multus-daemon-config\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975320 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975384 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tgfnd"
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975483 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975559 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975577 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975589 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975601 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975615 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName:
\"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975627 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975738 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975831 5108 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975849 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975862 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975874 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975887 5108 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975900 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975911 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975923 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975935 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975947 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975960 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975973 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: 
\"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975987 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.975999 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976115 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976130 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976143 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976156 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976168 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 
12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976182 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976196 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976210 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976222 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976228 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-cni-binary-copy\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976234 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976269 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: 
I1212 14:12:19.976211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976285 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976281 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976361 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976374 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976594 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976694 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.976973 5108 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977008 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977022 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977032 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977043 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977058 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977746 5108 reconciler_common.go:299] "Volume detached 
for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977757 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977766 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977778 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977788 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.978552 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.978564 5108 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.978573 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.977915 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.978752 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.978966 5108 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979058 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979139 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979168 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: 
\"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979178 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979187 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979197 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979209 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979218 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979248 5108 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979259 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") 
on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979268 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979547 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979566 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979578 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979802 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979823 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979856 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979920 5108 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979931 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979941 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979980 5108 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.979995 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980008 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980019 5108 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980031 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980066 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980113 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980129 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980141 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980157 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980169 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980209 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980222 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980235 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980247 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980281 5108 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980297 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980314 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980326 5108 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc 
kubenswrapper[5108]: I1212 14:12:19.980338 5108 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980375 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980392 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980403 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980416 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980454 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980468 5108 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980481 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980492 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980504 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980539 5108 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980554 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980567 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980579 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980593 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath 
\"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980629 5108 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980642 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980654 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980666 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980679 5108 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980716 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980730 5108 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980744 5108 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980757 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980792 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980805 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980819 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980832 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980846 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980882 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" 
(UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.980895 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.981158 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.981415 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.985087 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.985666 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.985706 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.985853 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.985855 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.985856 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.985984 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.986309 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.986211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.986552 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.986731 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.987147 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.987078 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.987313 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.987857 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.987910 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.988103 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.989172 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.989474 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.989621 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.989756 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.991669 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.991981 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.993044 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.994621 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.995246 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" Dec 12 14:12:19 crc kubenswrapper[5108]: I1212 14:12:19.996527 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvkpl\" (UniqueName: \"kubernetes.io/projected/1e8c3045-7200-4b39-9531-5ce86ab0b5b5-kube-api-access-qvkpl\") pod \"multus-ztpws\" (UID: \"1e8c3045-7200-4b39-9531-5ce86ab0b5b5\") " pod="openshift-multus/multus-ztpws" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.001354 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.001457 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.003890 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ztpws" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.012759 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.028863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.028905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.028916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.028930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.028939 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.029797 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.081963 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082088 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082136 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082174 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082192 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082206 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082218 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082231 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082244 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082255 5108 
reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082266 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082274 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082282 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082290 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082299 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082308 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082319 5108 
reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082330 5108 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082340 5108 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082351 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082363 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082376 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082411 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082419 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082427 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082436 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082444 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082453 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082460 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082469 5108 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082477 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082486 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082495 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082506 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082517 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082529 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082540 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082648 5108 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node 
\"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082661 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082672 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.082684 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.082802 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.082819 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.082830 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.082886 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b 
podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.082869661 +0000 UTC m=+93.990860820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083259 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083310 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083350 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.083330133 +0000 UTC m=+93.991321292 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083408 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.083383704 +0000 UTC m=+93.991374863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083468 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083482 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083494 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.083537 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.083527078 +0000 UTC m=+93.991518227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.084249 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.130316 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.130620 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.130629 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.130646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.130655 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.204229 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.234651 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.234705 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.234718 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.234738 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.234754 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: W1212 14:12:20.239853 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-e3d96383e20cad1dd6dd063c03fb61bcdda49770d8db9b689ae2daeb5d79c46f WatchSource:0}: Error finding container e3d96383e20cad1dd6dd063c03fb61bcdda49770d8db9b689ae2daeb5d79c46f: Status 404 returned error can't find the container with id e3d96383e20cad1dd6dd063c03fb61bcdda49770d8db9b689ae2daeb5d79c46f Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.337264 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.337324 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.337338 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.337353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.337364 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.386691 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.386919 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.386881069 +0000 UTC m=+94.294872228 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.387100 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.387271 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: E1212 14:12:20.387375 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs podName:d8c95a75-0c3b-4caa-9b09-30c6dca73e72 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.387352121 +0000 UTC m=+94.295343280 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs") pod "network-metrics-daemon-p4g92" (UID: "d8c95a75-0c3b-4caa-9b09-30c6dca73e72") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.439879 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.439924 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.439936 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.439953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.439965 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.543641 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.543692 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.543705 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.543721 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.543733 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.646598 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.646646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.646659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.646674 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.646686 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.733220 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztpws" event={"ID":"1e8c3045-7200-4b39-9531-5ce86ab0b5b5","Type":"ContainerStarted","Data":"ac28c2e3a31b1607275402b0d718319be640cc0e29653600c0bb3bfe498f42ff"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.733261 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztpws" event={"ID":"1e8c3045-7200-4b39-9531-5ce86ab0b5b5","Type":"ContainerStarted","Data":"b146a8d2cbc44a23872f72bffda5201410fcd3fe6e06a5309c4bc532a01d769e"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.736349 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"b550b7a8c11bf3ff355d52228d66c08f2b81870d3a9e5942b1d9c8f379810ded"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.736402 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"59bb44262fa109c767656d9eb9c0c339275e7515ac478a28e10f263d2cb3e961"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.736415 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"370a60c7bbddc0c9a8c314c4414c782d50a607afd56c025e9308178568018ab1"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.738814 5108 generic.go:358] "Generic (PLEG): container finished" podID="65a61526-11c5-4c70-ae85-c126f893efd8" containerID="1d5dbc2645dc95f8b4f0550d628c7760a56ab2d362bb2bd0651da9aa76c15c29" exitCode=0 Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.738983 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerDied","Data":"1d5dbc2645dc95f8b4f0550d628c7760a56ab2d362bb2bd0651da9aa76c15c29"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.739180 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerStarted","Data":"4ce3b13eca59530e8707985596778764e71d7018badca9f609c12e4a05b52874"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.744006 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tgfnd" event={"ID":"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9","Type":"ContainerStarted","Data":"a322b7c3d2e40f80468e9520667732211071709df584add693361e1919f673f8"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.744051 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tgfnd" event={"ID":"93177b5a-a677-4e5b-ac8d-cf64e68bd1d9","Type":"ContainerStarted","Data":"1bc7083aeabfed6d359a8922f19f70a5c6ab6c451b23f0f44a8dbce6f79fba49"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.749563 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.749847 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.750134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.750306 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="68c7e7ce42d7d01313b8ae6c15bcab4983632d2398dd1b85bcfa8767a8ee7b30" exitCode=0 Dec 12 14:12:20 crc 
kubenswrapper[5108]: I1212 14:12:20.750368 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"68c7e7ce42d7d01313b8ae6c15bcab4983632d2398dd1b85bcfa8767a8ee7b30"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.750406 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"b992142de4239b8b21dcd0596986f91d86fd70204d0fbf8258995c96c7f0ca90"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.750329 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.750458 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.752686 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"27e0ea7d0f10d3a3ae31c3d0059222e940c24d073ed2b61bc1a49f5721c8bf85"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.752715 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f5acf66b8a427437cd94c6ce25adc8eb45fbff65c5a07c05ea1a2f07beaead85"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.752728 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"8b35af4f8a5d3d7fdfd7b70fb554ab4e3b95b74f1c1866243d06ecb4dc8470c9"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.753608 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"e3d96383e20cad1dd6dd063c03fb61bcdda49770d8db9b689ae2daeb5d79c46f"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.755131 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9l7sp" event={"ID":"23a662b0-e060-413b-a12c-6ef886171e85","Type":"ContainerStarted","Data":"a40299ab6cae895b98adc917456d6558e6ea6f8ac330065204023e95a60eecac"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.755176 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9l7sp" 
event={"ID":"23a662b0-e060-413b-a12c-6ef886171e85","Type":"ContainerStarted","Data":"7ccb75d82fce5b6eec1d113eb1bcd91a29968fd8f41b85a0dc0847d2134231ee"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.756334 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"82431befbd9c48dc30a04574bf5c5c3b0b1d059710d18574487b60cccd8fad27"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.757370 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" event={"ID":"d8603a7b-d127-481c-8901-fff3b6f9f38b","Type":"ContainerStarted","Data":"9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.757398 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" event={"ID":"d8603a7b-d127-481c-8901-fff3b6f9f38b","Type":"ContainerStarted","Data":"6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.757409 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" event={"ID":"d8603a7b-d127-481c-8901-fff3b6f9f38b","Type":"ContainerStarted","Data":"18c74d41b1790e55c6779339ddb0156f97c829114e50a66c9954e0eedd31b330"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.848408 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.8483910730000002 podStartE2EDuration="1.848391073s" podCreationTimestamp="2025-12-12 14:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:20.848316291 +0000 
UTC m=+93.756307470" watchObservedRunningTime="2025-12-12 14:12:20.848391073 +0000 UTC m=+93.756382232" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.852881 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.852921 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.852934 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.852950 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.852959 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.935816 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ztpws" podStartSLOduration=73.935797139 podStartE2EDuration="1m13.935797139s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:20.885037056 +0000 UTC m=+93.793028215" watchObservedRunningTime="2025-12-12 14:12:20.935797139 +0000 UTC m=+93.843788298" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.954779 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.954759257 podStartE2EDuration="1.954759257s" podCreationTimestamp="2025-12-12 14:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:20.93622419 +0000 UTC m=+93.844215359" watchObservedRunningTime="2025-12-12 14:12:20.954759257 +0000 UTC m=+93.862750416" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.954998 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.954993113 podStartE2EDuration="1.954993113s" podCreationTimestamp="2025-12-12 14:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:20.954651684 +0000 UTC m=+93.862642853" watchObservedRunningTime="2025-12-12 14:12:20.954993113 +0000 UTC m=+93.862984272" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.957608 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.957639 5108 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.957648 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.957663 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:20 crc kubenswrapper[5108]: I1212 14:12:20.957673 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:20Z","lastTransitionTime":"2025-12-12T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.062514 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.062801 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.062813 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.062831 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.062842 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.103281 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.103359 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103374 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.103397 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.103427 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103447 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.103430967 +0000 UTC m=+96.011422126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103559 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103586 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103599 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103665 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.103647773 +0000 UTC m=+96.011639002 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103727 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103741 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103750 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103778 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.103769447 +0000 UTC m=+96.011760606 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103798 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.103832 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.103824318 +0000 UTC m=+96.011815477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.123608 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.123590718 podStartE2EDuration="2.123590718s" podCreationTimestamp="2025-12-12 14:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.103128799 +0000 UTC m=+94.011119958" watchObservedRunningTime="2025-12-12 14:12:21.123590718 +0000 UTC m=+94.031581877"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.165799 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.165855 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.165865 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.165880 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.165890 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.172347 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" podStartSLOduration=74.172328636 podStartE2EDuration="1m14.172328636s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.171687949 +0000 UTC m=+94.079679138" watchObservedRunningTime="2025-12-12 14:12:21.172328636 +0000 UTC m=+94.080319795"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.266195 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podStartSLOduration=74.266172575 podStartE2EDuration="1m14.266172575s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.241858661 +0000 UTC m=+94.149849840" watchObservedRunningTime="2025-12-12 14:12:21.266172575 +0000 UTC m=+94.174163744"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.267245 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.267304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.267318 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.267336 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.267351 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.296075 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9l7sp" podStartSLOduration=74.296051956 podStartE2EDuration="1m14.296051956s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.296031065 +0000 UTC m=+94.204022234" watchObservedRunningTime="2025-12-12 14:12:21.296051956 +0000 UTC m=+94.204043115"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.312805 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tgfnd" podStartSLOduration=74.312782245 podStartE2EDuration="1m14.312782245s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.312734824 +0000 UTC m=+94.220726003" watchObservedRunningTime="2025-12-12 14:12:21.312782245 +0000 UTC m=+94.220773404"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.369265 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.369316 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.369347 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.369366 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.369379 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.407420 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.407520 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.407542 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.407664 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.407793 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.407851 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs podName:d8c95a75-0c3b-4caa-9b09-30c6dca73e72 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.407835936 +0000 UTC m=+96.315827095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs") pod "network-metrics-daemon-p4g92" (UID: "d8c95a75-0c3b-4caa-9b09-30c6dca73e72") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.407916 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.407907488 +0000 UTC m=+96.315898647 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.407925 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.407981 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.412182 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.412326 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.412364 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:21 crc kubenswrapper[5108]: E1212 14:12:21.412437 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.417394 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.418374 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.419871 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.421160 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.423307 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.424717 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.426059 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.427668 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.428479 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.430233 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.431239 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.433004 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.434011 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.435713 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.436239 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.436902 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.438063 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.439172 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.440383 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.441246 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.442067 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.443795 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.444898 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.446855 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.449424 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.451460 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.453901 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.455225 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.458476 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.460046 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.461912 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.464113 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.465978 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.467674 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.468647 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.469995 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.470900 5108 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.471020 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.471057 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.471129 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.471146 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.471164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.471179 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.477115 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.478877 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.479923 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.482804 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.483535 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.490318 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.491564 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.493910 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.498071 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.499594 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.504522 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.505271 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.506479 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.507374 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.509715 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.513020 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.519792 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.521051 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.523347 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.524483 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.573965 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.574952 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.575024 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.575103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.575194 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.679888 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.680276 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.680289 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.680305 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.680317 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.767566 5108 generic.go:358] "Generic (PLEG): container finished" podID="65a61526-11c5-4c70-ae85-c126f893efd8" containerID="226a45118b65ef2579a6c3c08f6aa526dd31c36f04b516d6311eeecdcc7fcef6" exitCode=0
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.767784 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerDied","Data":"226a45118b65ef2579a6c3c08f6aa526dd31c36f04b516d6311eeecdcc7fcef6"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.779465 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"a1f681c1c61bf023f01cbca01e489ba9853462e7471cc85cc24e1b5da86096ea"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.779510 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"9919b79275f59aff26b0acffc3954a149d74c9173a5c44d77512934a99cadd03"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.779523 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"9da4bf297887a716ed638824bbce5aca0592ab7354dff37269b576a4154f6b66"}
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.782422 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.782466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.782478 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.782494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.782506 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.884361 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.884406 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.884418 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.884434 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.884448 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.987574 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.987623 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.987636 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.987652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:21 crc kubenswrapper[5108]: I1212 14:12:21.987719 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:21Z","lastTransitionTime":"2025-12-12T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.092183 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.092255 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.092266 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.092282 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.092290 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.194292 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.194341 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.194352 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.194367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.194378 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.296664 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.296718 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.296730 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.296744 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.296753 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.347845 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.398704 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.398754 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.398766 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.398782 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.398798 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.501032 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.501132 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.501158 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.501191 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.501219 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.603393 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.603456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.603471 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.603489 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.603501 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.706021 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.706071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.706102 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.706128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.706138 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.786549 5108 generic.go:358] "Generic (PLEG): container finished" podID="65a61526-11c5-4c70-ae85-c126f893efd8" containerID="f3c19259ea0ec769c36b36c1b6f43bd72d73fcd5daae2dd5ab041358d8f039f0" exitCode=0 Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.786660 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerDied","Data":"f3c19259ea0ec769c36b36c1b6f43bd72d73fcd5daae2dd5ab041358d8f039f0"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.799048 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"5615ed6026dc7cc3d5c646cc273ee282bf8790a71ae4a50ea8a8067550bf067f"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.799139 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"f941cb5e5a8e0f562bf1274b00288a3e58fe27459711c3e231201377c4cb7a10"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.799199 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"bcfb8a5acb80dea15b10468780de99a6fb687ef49e693d7fb552ed187b78607b"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.807904 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.807941 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.807953 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.807968 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.807979 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.911826 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.912273 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.912289 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.912307 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:22 crc kubenswrapper[5108]: I1212 14:12:22.912320 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:22Z","lastTransitionTime":"2025-12-12T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.015477 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.015528 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.015539 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.015556 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.015567 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.119033 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.119103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.119117 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.119133 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.119145 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.126457 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.126509 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.126530 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126704 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126729 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126742 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126799 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.126783564 +0000 UTC m=+100.034774723 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.126791 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126877 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126894 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126903 5108 projected.go:194] Error 
preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126936 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.126927207 +0000 UTC m=+100.034918416 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126984 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.126996 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.127018 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.12701162 +0000 UTC m=+100.035002849 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.127070 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.127049471 +0000 UTC m=+100.035040670 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.221487 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.221531 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.221543 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.221561 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.221572 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.324025 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.324070 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.324106 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.324122 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.324134 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.421580 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.421736 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.421761 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.421790 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.421995 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.422013 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.422283 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.422374 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.426589 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.426657 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.426684 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.426714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.426735 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.430125 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.430421 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.43038163 +0000 UTC m=+100.338372829 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.430572 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.430756 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:23 crc kubenswrapper[5108]: E1212 14:12:23.430848 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs podName:d8c95a75-0c3b-4caa-9b09-30c6dca73e72 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.430827313 +0000 UTC m=+100.338818522 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs") pod "network-metrics-daemon-p4g92" (UID: "d8c95a75-0c3b-4caa-9b09-30c6dca73e72") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.529659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.529709 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.529722 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.529739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.529751 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.632819 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.632901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.632916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.632939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.632958 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.735109 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.735162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.735174 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.735211 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.735243 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.805533 5108 generic.go:358] "Generic (PLEG): container finished" podID="65a61526-11c5-4c70-ae85-c126f893efd8" containerID="2a009df279a129aa7885c8ac11c8af27087f46be52e10490ea24ad3aca739015" exitCode=0
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.805629 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerDied","Data":"2a009df279a129aa7885c8ac11c8af27087f46be52e10490ea24ad3aca739015"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.824407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"e77382ed2a634eba38b927f4046daeb8627465aaa3f0f1328f36300bd391925d"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.825664 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"e1e650be106461e47c05adf128c842f3e7d61c6d02df87f1d98cc5ac42212ce7"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.838052 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.838113 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.838125 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.838139 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.838151 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.950900 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.950948 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.950961 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.950977 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:23 crc kubenswrapper[5108]: I1212 14:12:23.950988 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:23Z","lastTransitionTime":"2025-12-12T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.056250 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.056415 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.056437 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.056459 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.056471 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.158974 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.159022 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.159035 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.159051 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.159066 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.260969 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.261006 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.261015 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.261027 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.261036 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.363885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.363960 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.363983 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.364012 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.364034 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.466693 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.466756 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.466778 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.466812 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.466835 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.570023 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.570058 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.570068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.570105 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.570125 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.672269 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.672312 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.672321 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.672336 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.672346 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.775293 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.775373 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.775388 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.775406 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.775419 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.833556 5108 generic.go:358] "Generic (PLEG): container finished" podID="65a61526-11c5-4c70-ae85-c126f893efd8" containerID="defc42bcb2a8112aaca1d8ceafaf200fbd8c7d6467eb32cd9de64fbbfd6145cb" exitCode=0
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.833623 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerDied","Data":"defc42bcb2a8112aaca1d8ceafaf200fbd8c7d6467eb32cd9de64fbbfd6145cb"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.878043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.878128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.878137 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.878150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.878158 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.980782 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.980833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.980845 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.980891 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:24 crc kubenswrapper[5108]: I1212 14:12:24.980907 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:24Z","lastTransitionTime":"2025-12-12T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.083907 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.083958 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.083970 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.083989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.084000 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.187126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.187175 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.187184 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.187200 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.187213 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.289299 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.289349 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.289367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.289388 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.289405 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.391442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.391479 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.391490 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.391504 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.391513 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.406389 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:25 crc kubenswrapper[5108]: E1212 14:12:25.406496 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.407759 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:25 crc kubenswrapper[5108]: E1212 14:12:25.407839 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.407887 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:25 crc kubenswrapper[5108]: E1212 14:12:25.407929 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.407999 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:25 crc kubenswrapper[5108]: E1212 14:12:25.408160 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.493661 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.493704 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.493713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.493728 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.493737 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.595535 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.595571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.595579 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.595591 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.595600 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.697831 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.698176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.698186 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.698201 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.698212 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.800263 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.800329 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.800345 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.800371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.800387 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.844752 5108 generic.go:358] "Generic (PLEG): container finished" podID="65a61526-11c5-4c70-ae85-c126f893efd8" containerID="c892e13478f1d378da1035fc690b2d3cd1a140d07a178c71ef93bb8cd951ea6e" exitCode=0 Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.844848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerDied","Data":"c892e13478f1d378da1035fc690b2d3cd1a140d07a178c71ef93bb8cd951ea6e"} Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.851947 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerStarted","Data":"3904bdd05696ca809605e7ff25066a563efbdf5d7e944cc4cc56b32b255f428e"} Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.852462 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.852531 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.852555 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.883187 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.888553 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.902997 5108 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.903055 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.903068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.903103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.903117 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:25Z","lastTransitionTime":"2025-12-12T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:25 crc kubenswrapper[5108]: I1212 14:12:25.908643 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podStartSLOduration=78.908622663 podStartE2EDuration="1m18.908622663s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:25.906804755 +0000 UTC m=+98.814795924" watchObservedRunningTime="2025-12-12 14:12:25.908622663 +0000 UTC m=+98.816613832" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.005738 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.005776 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.005786 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.005805 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.005815 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.109763 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.109816 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.109835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.109855 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.109868 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.214066 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.214132 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.214146 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.214164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.214175 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.315689 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.315737 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.315749 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.315764 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.315775 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.417221 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.417268 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.417280 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.417295 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.417308 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.520134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.520181 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.520196 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.520214 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.520226 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.622525 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.622589 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.622606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.622625 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.622638 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.725215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.725267 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.725279 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.725296 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.725308 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.827619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.827670 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.827682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.827699 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.827712 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.930203 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.930292 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.930321 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.930353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:26 crc kubenswrapper[5108]: I1212 14:12:26.930377 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:26Z","lastTransitionTime":"2025-12-12T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.032749 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.032795 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.032807 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.032824 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.032834 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.134870 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.134908 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.134917 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.134930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.134940 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.173840 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.173896 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.173920 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.173938 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174051 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174064 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174073 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174155 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.174139554 +0000 UTC m=+108.082130713 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174398 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174437 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174458 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174489 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174517 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.174509313 +0000 UTC m=+108.082500472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174530 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.174524224 +0000 UTC m=+108.082515383 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174588 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.174672 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.174640028 +0000 UTC m=+108.082631277 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.236872 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.236912 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.236925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.236941 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.236953 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.338940 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.338989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.339005 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.339026 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.339040 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.411582 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.411623 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.411659 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.411767 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.411849 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.411883 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.411942 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72"
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.412070 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.441720 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.441790 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.441800 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.441825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.441836 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.501367 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.501500 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.501679 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.501744 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs podName:d8c95a75-0c3b-4caa-9b09-30c6dca73e72 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.501725774 +0000 UTC m=+108.409716933 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs") pod "network-metrics-daemon-p4g92" (UID: "d8c95a75-0c3b-4caa-9b09-30c6dca73e72") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.502172 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.502159166 +0000 UTC m=+108.410150335 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.545356 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.545426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.545438 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.545454 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.545466 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.648448 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.648505 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.648515 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.648533 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.648545 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.750013 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.750054 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.750062 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.750109 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.750123 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.852239 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.852303 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.852313 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.852327 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.852337 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.868030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" event={"ID":"65a61526-11c5-4c70-ae85-c126f893efd8","Type":"ContainerStarted","Data":"8b67ae1058068cc1bc8ede61243d897adf11be7cae42678a642c90eb09f48ee4"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.955023 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.955137 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.955159 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.955177 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.955189 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:27Z","lastTransitionTime":"2025-12-12T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.990966 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p4g92"]
Dec 12 14:12:27 crc kubenswrapper[5108]: I1212 14:12:27.991124 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:27 crc kubenswrapper[5108]: E1212 14:12:27.991241 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.059798 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.059845 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.059860 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.059879 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.059895 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.162062 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.162114 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.162126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.162138 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.162148 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.264002 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.264060 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.264109 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.264141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.264160 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.366815 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.366894 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.366907 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.366925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.366958 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.469286 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.469349 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.469358 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.469371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.469380 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.571569 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.571609 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.571621 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.571636 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.571647 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.674209 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.674257 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.674270 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.674285 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.674296 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.777574 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.777639 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.777650 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.777674 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.777699 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.879637 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.879700 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.879711 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.879732 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.879744 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.896477 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ctxlm" podStartSLOduration=81.896450271 podStartE2EDuration="1m21.896450271s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:28.892628549 +0000 UTC m=+101.800619728" watchObservedRunningTime="2025-12-12 14:12:28.896450271 +0000 UTC m=+101.804441450"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.981765 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.981820 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.981831 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.981849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:28 crc kubenswrapper[5108]: I1212 14:12:28.981860 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:28Z","lastTransitionTime":"2025-12-12T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.083606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.083660 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.083673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.083691 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.083703 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.185528 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.185586 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.185600 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.185618 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.185633 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.292115 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.292176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.292186 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.292204 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.292215 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.394927 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.395013 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.395031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.395055 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.395072 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.407282 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.407317 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.407407 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:29 crc kubenswrapper[5108]: E1212 14:12:29.407418 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:29 crc kubenswrapper[5108]: E1212 14:12:29.407501 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:29 crc kubenswrapper[5108]: E1212 14:12:29.407600 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.407613 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:29 crc kubenswrapper[5108]: E1212 14:12:29.407810 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.497473 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.497513 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.497522 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.497534 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.497543 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.599244 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.599284 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.599296 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.599309 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.599318 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.701650 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.701895 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.702026 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.702119 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.702214 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.803757 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.803799 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.803810 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.803835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.803845 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.905842 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.905889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.905905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.905925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.905935 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.937461 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.937496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.937508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.937523 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.937533 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:29Z","lastTransitionTime":"2025-12-12T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:29 crc kubenswrapper[5108]: I1212 14:12:29.984415 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5"] Dec 12 14:12:30 crc kubenswrapper[5108]: I1212 14:12:30.411432 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 14:12:30 crc kubenswrapper[5108]: I1212 14:12:30.422962 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.397786 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.397904 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.397958 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.397933 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:31 crc kubenswrapper[5108]: E1212 14:12:31.397917 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 14:12:31 crc kubenswrapper[5108]: E1212 14:12:31.398112 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72" Dec 12 14:12:31 crc kubenswrapper[5108]: E1212 14:12:31.398174 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 14:12:31 crc kubenswrapper[5108]: E1212 14:12:31.398333 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.398660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.400392 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.400814 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.401255 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.403124 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.544970 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/13f26732-b6d4-41aa-bde4-988f7178ea70-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: 
\"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.545012 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/13f26732-b6d4-41aa-bde4-988f7178ea70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.545030 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13f26732-b6d4-41aa-bde4-988f7178ea70-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.545139 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f26732-b6d4-41aa-bde4-988f7178ea70-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.545238 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13f26732-b6d4-41aa-bde4-988f7178ea70-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.645829 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/13f26732-b6d4-41aa-bde4-988f7178ea70-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.645876 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/13f26732-b6d4-41aa-bde4-988f7178ea70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.645906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13f26732-b6d4-41aa-bde4-988f7178ea70-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.645929 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f26732-b6d4-41aa-bde4-988f7178ea70-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.645959 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/13f26732-b6d4-41aa-bde4-988f7178ea70-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") 
" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.646003 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/13f26732-b6d4-41aa-bde4-988f7178ea70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.645982 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13f26732-b6d4-41aa-bde4-988f7178ea70-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.656336 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13f26732-b6d4-41aa-bde4-988f7178ea70-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:31 crc kubenswrapper[5108]: I1212 14:12:31.672571 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f26732-b6d4-41aa-bde4-988f7178ea70-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:32 crc kubenswrapper[5108]: I1212 14:12:32.004193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/13f26732-b6d4-41aa-bde4-988f7178ea70-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xhpw5\" (UID: \"13f26732-b6d4-41aa-bde4-988f7178ea70\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:32 crc kubenswrapper[5108]: I1212 14:12:32.023334 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" Dec 12 14:12:32 crc kubenswrapper[5108]: I1212 14:12:32.891684 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" event={"ID":"13f26732-b6d4-41aa-bde4-988f7178ea70","Type":"ContainerStarted","Data":"d12e6dac05cf93050537ecd25e84d37c88ab3ebdc447b6a2e27b789d9abf0357"} Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.407308 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.407427 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:33 crc kubenswrapper[5108]: E1212 14:12:33.407437 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.407451 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92" Dec 12 14:12:33 crc kubenswrapper[5108]: E1212 14:12:33.407507 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 14:12:33 crc kubenswrapper[5108]: E1212 14:12:33.407603 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p4g92" podUID="d8c95a75-0c3b-4caa-9b09-30c6dca73e72" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.407651 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:33 crc kubenswrapper[5108]: E1212 14:12:33.407733 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.598533 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.598877 5108 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.635589 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.919727 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5k7p6"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.919844 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.922205 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.922338 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.922379 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.922820 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.925424 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 
14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.925766 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-ztlbz"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.925935 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.926381 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.927427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.929687 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.930433 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.930510 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.931046 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.931185 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.931869 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.931929 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-np6kd"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.932974 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.932997 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.933231 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.933311 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.934600 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.934754 5108 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.934942 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-np6kd" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.935507 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.935653 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.935761 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.935762 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.937039 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.937815 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.938215 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.938527 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 
14:12:33.938852 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940219 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940256 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"] Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940227 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940349 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940381 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940486 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940926 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.940974 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941069 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941288 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941587 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941663 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941771 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941823 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941887 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.941895 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.942838 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hdk9b"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.943289 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.944473 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.946987 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.947488 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.947836 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.948352 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.948566 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zsrsd"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.948794 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.949023 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.949431 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.949996 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.950026 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.950043 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.951212 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.951267 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.951297 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.952123 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.952828 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.953295 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.955213 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.956905 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.956992 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-cjjqx"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.957027 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.957144 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.957614 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.959279 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.959941 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.961448 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.961689 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.961846 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.962568 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-hrj8v"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.962724 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.965616 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.965883 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.965957 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-blzxz"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.966137 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.966576 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.966607 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.966670 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.966924 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-hrj8v"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.967073 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.967436 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.968536 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.969153 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.969220 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.969373 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.973034 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.973060 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.973062 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.973168 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.973561 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974322 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974680 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-etcd-client\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974711 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ec1fbc1-dd55-49ef-b374-28698de88e40-audit-dir\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974768 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854f71d2-1dd7-45d6-b368-e879d3a14f59-config\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974821 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974821 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-service-ca\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974862 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-encryption-config\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974885 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-audit\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974904 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-serving-cert\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwh7\" (UniqueName: \"kubernetes.io/projected/b2a054ba-6a30-47c2-b042-8e859282af9c-kube-api-access-vmwh7\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.974948 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-image-import-ca\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975017 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-client-ca\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975062 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-oauth-config\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975111 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ec1fbc1-dd55-49ef-b374-28698de88e40-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975158 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975180 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e90fae7-1eff-4924-ba9c-a1325c4099e9-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975203 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0eab168-419a-4cb1-b318-244a89a1af5e-serving-cert\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975223 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0eab168-419a-4cb1-b318-244a89a1af5e-tmp\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975253 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26503754-c774-4a77-8b46-f1bd96f096b4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bvv8g\" (UID: \"26503754-c774-4a77-8b46-f1bd96f096b4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975349 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76zx\" (UniqueName: \"kubernetes.io/projected/4ec1fbc1-dd55-49ef-b374-28698de88e40-kube-api-access-x76zx\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975374 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-client-ca\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975426 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6da6d66-adc4-4cd5-968f-21877a7820f0-serving-cert\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975462 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975512 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e90fae7-1eff-4924-ba9c-a1325c4099e9-config\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975542 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/854f71d2-1dd7-45d6-b368-e879d3a14f59-trusted-ca\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975605 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pzf\" (UniqueName: \"kubernetes.io/projected/d2d38bed-cc7f-4c81-a918-78814a48a49f-kube-api-access-86pzf\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975631 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4149b83c-6a14-4f2f-b097-e59fcb47b122-config\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975652 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854f71d2-1dd7-45d6-b368-e879d3a14f59-serving-cert\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975674 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vgg\" (UniqueName: \"kubernetes.io/projected/854f71d2-1dd7-45d6-b368-e879d3a14f59-kube-api-access-n2vgg\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975699 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-config\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975754 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-config\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975775 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw9gl\" (UniqueName: \"kubernetes.io/projected/b6da6d66-adc4-4cd5-968f-21877a7820f0-kube-api-access-rw9gl\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975792 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-etcd-serving-ca\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975835 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxj4\" (UniqueName: \"kubernetes.io/projected/4149b83c-6a14-4f2f-b097-e59fcb47b122-kube-api-access-xpxj4\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975849 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq8pl\" (UniqueName: \"kubernetes.io/projected/26503754-c774-4a77-8b46-f1bd96f096b4-kube-api-access-gq8pl\") pod \"cluster-samples-operator-6b564684c8-bvv8g\" (UID: \"26503754-c774-4a77-8b46-f1bd96f096b4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975896 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-serving-cert\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e90fae7-1eff-4924-ba9c-a1325c4099e9-images\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975948 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6da6d66-adc4-4cd5-968f-21877a7820f0-tmp\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.975973 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-audit-policies\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976004 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-trusted-ca-bundle\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976044 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-etcd-client\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976060 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4149b83c-6a14-4f2f-b097-e59fcb47b122-machine-approver-tls\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976087 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-config\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976101 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-serving-cert\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976120 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4149b83c-6a14-4f2f-b097-e59fcb47b122-auth-proxy-config\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976135 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f865\" (UniqueName: \"kubernetes.io/projected/a0eab168-419a-4cb1-b318-244a89a1af5e-kube-api-access-7f865\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976150 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-oauth-serving-cert\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976170 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976183 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2a054ba-6a30-47c2-b042-8e859282af9c-audit-dir\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976200 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976214 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkknk\" (UniqueName: \"kubernetes.io/projected/8e90fae7-1eff-4924-ba9c-a1325c4099e9-kube-api-access-rkknk\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976231 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-config\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.976247 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-encryption-config\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.978050 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.978033 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.978599 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.978740 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.978911 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.978966 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.981797 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.982347 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.988197 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.988310 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.988953 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"]
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.989061 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.992429 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.995066 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.995317 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.997915 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.999164 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:33 crc kubenswrapper[5108]: I1212 14:12:33.999191 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.002163 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.002347 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.002399 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.002936 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.006778 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.006823 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.007199 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.009834 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.009975 5108 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.012211 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-lcqd6"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.016692 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.016753 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.016773 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.020398 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.020514 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.022734 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.023397 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-87mjz"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.024588 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.028796 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.029735 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.032106 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.032275 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.034553 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.034738 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.036869 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.037011 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.043214 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-k6vgk"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.043459 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.044638 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.047258 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-l5t96"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.047339 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.050028 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.050121 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.053012 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.053100 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.056281 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.056347 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.058976 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.059164 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.072314 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.073375 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.073805 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.077882 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vgg\" (UniqueName: \"kubernetes.io/projected/854f71d2-1dd7-45d6-b368-e879d3a14f59-kube-api-access-n2vgg\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078097 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078203 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-policies\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078294 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-config\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078373 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078471 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-config\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078557 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rw9gl\" (UniqueName: \"kubernetes.io/projected/b6da6d66-adc4-4cd5-968f-21877a7820f0-kube-api-access-rw9gl\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078628 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-etcd-serving-ca\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078704 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 
14:12:34.078783 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078854 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxj4\" (UniqueName: \"kubernetes.io/projected/4149b83c-6a14-4f2f-b097-e59fcb47b122-kube-api-access-xpxj4\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.078934 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gq8pl\" (UniqueName: \"kubernetes.io/projected/26503754-c774-4a77-8b46-f1bd96f096b4-kube-api-access-gq8pl\") pod \"cluster-samples-operator-6b564684c8-bvv8g\" (UID: \"26503754-c774-4a77-8b46-f1bd96f096b4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079010 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-serving-cert\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079111 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e90fae7-1eff-4924-ba9c-a1325c4099e9-images\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079161 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-config\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079244 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6da6d66-adc4-4cd5-968f-21877a7820f0-tmp\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079331 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-audit-policies\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079414 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-installation-pull-secrets\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079527 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvjfz\" (UniqueName: \"kubernetes.io/projected/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-kube-api-access-vvjfz\") pod 
\"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079637 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-trusted-ca-bundle\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079746 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.079419 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-config\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080014 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080122 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-tls\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-etcd-client\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080237 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4149b83c-6a14-4f2f-b097-e59fcb47b122-machine-approver-tls\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080391 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-config\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080458 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-serving-cert\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080514 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-certificates\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080558 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4149b83c-6a14-4f2f-b097-e59fcb47b122-auth-proxy-config\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080618 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7f865\" (UniqueName: \"kubernetes.io/projected/a0eab168-419a-4cb1-b318-244a89a1af5e-kube-api-access-7f865\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080636 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6da6d66-adc4-4cd5-968f-21877a7820f0-tmp\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080646 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-oauth-serving-cert\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " 
pod="openshift-console/console-64d44f6ddf-np6kd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080709 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm65j\" (UniqueName: \"kubernetes.io/projected/a66581ea-fc96-4aba-9332-566bb17c7b71-kube-api-access-pm65j\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: \"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080770 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080800 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2a054ba-6a30-47c2-b042-8e859282af9c-audit-dir\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.080828 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltpf\" (UniqueName: \"kubernetes.io/projected/29c8dcea-f999-4e33-9f5e-ef9eb8a423f7-kube-api-access-rltpf\") pod \"downloads-747b44746d-hrj8v\" (UID: \"29c8dcea-f999-4e33-9f5e-ef9eb8a423f7\") " pod="openshift-console/downloads-747b44746d-hrj8v" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081174 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rkknk\" (UniqueName: 
\"kubernetes.io/projected/8e90fae7-1eff-4924-ba9c-a1325c4099e9-kube-api-access-rkknk\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081196 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-config\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081218 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081236 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-trusted-ca\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081259 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-encryption-config\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081276 
5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081305 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-etcd-client\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081324 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081345 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ec1fbc1-dd55-49ef-b374-28698de88e40-audit-dir\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081350 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-trusted-ca-bundle\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " 
pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081506 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-audit-policies\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081842 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081984 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-config\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.082048 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-etcd-serving-ca\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.082170 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-oauth-serving-cert\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.082185 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854f71d2-1dd7-45d6-b368-e879d3a14f59-config\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.082240 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.081362 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854f71d2-1dd7-45d6-b368-e879d3a14f59-config\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.082781 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4149b83c-6a14-4f2f-b097-e59fcb47b122-auth-proxy-config\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.083039 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e90fae7-1eff-4924-ba9c-a1325c4099e9-images\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.083430 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.083502 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2a054ba-6a30-47c2-b042-8e859282af9c-audit-dir\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.083564 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-service-ca\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.083666 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.583647461 +0000 UTC m=+107.491638620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.083865 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-ca-trust-extracted\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.083925 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-bound-sa-token\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.083954 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-encryption-config\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084136 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-audit\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084213 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ec1fbc1-dd55-49ef-b374-28698de88e40-audit-dir\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084263 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-serving-cert\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084286 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmwh7\" (UniqueName: \"kubernetes.io/projected/b2a054ba-6a30-47c2-b042-8e859282af9c-kube-api-access-vmwh7\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084311 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084339 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-image-import-ca\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084364 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-client-ca\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084389 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-oauth-config\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084415 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ec1fbc1-dd55-49ef-b374-28698de88e40-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084488 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084511 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a66581ea-fc96-4aba-9332-566bb17c7b71-available-featuregates\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: \"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084532 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6s8q\" (UniqueName: \"kubernetes.io/projected/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-kube-api-access-s6s8q\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084554 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e90fae7-1eff-4924-ba9c-a1325c4099e9-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084575 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0eab168-419a-4cb1-b318-244a89a1af5e-serving-cert\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0eab168-419a-4cb1-b318-244a89a1af5e-tmp\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084603 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2d38bed-cc7f-4c81-a918-78814a48a49f-service-ca\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26503754-c774-4a77-8b46-f1bd96f096b4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bvv8g\" (UID: \"26503754-c774-4a77-8b46-f1bd96f096b4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084657 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66581ea-fc96-4aba-9332-566bb17c7b71-serving-cert\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: \"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084683 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-dir\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084697 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ec1fbc1-dd55-49ef-b374-28698de88e40-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084722 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x76zx\" (UniqueName: \"kubernetes.io/projected/4ec1fbc1-dd55-49ef-b374-28698de88e40-kube-api-access-x76zx\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084750 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-client-ca\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084775 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084809 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6da6d66-adc4-4cd5-968f-21877a7820f0-serving-cert\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084835 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084863 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084894 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084927 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e90fae7-1eff-4924-ba9c-a1325c4099e9-config\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084932 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.084951 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/854f71d2-1dd7-45d6-b368-e879d3a14f59-trusted-ca\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.085011 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.085050 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4x8l\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-kube-api-access-b4x8l\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.085121 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.085191 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-86pzf\" (UniqueName: \"kubernetes.io/projected/d2d38bed-cc7f-4c81-a918-78814a48a49f-kube-api-access-86pzf\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.085226 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4149b83c-6a14-4f2f-b097-e59fcb47b122-config\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.085250 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854f71d2-1dd7-45d6-b368-e879d3a14f59-serving-cert\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.085455 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-image-import-ca\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.086468 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4149b83c-6a14-4f2f-b097-e59fcb47b122-machine-approver-tls\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.086723 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-serving-cert\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.087238 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4ec1fbc1-dd55-49ef-b374-28698de88e40-audit\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.087290 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-encryption-config\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.087379 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-config\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.087765 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-serving-cert\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.088713 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.088829 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/854f71d2-1dd7-45d6-b368-e879d3a14f59-trusted-ca\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.089113 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.089318 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a054ba-6a30-47c2-b042-8e859282af9c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.089409 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-client-ca\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.089551 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-client-ca\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.089569 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.089806 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.090162 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4149b83c-6a14-4f2f-b097-e59fcb47b122-config\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.090332 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0eab168-419a-4cb1-b318-244a89a1af5e-tmp\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.090607 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e90fae7-1eff-4924-ba9c-a1325c4099e9-config\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.090800 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-encryption-config\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.091935 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d2d38bed-cc7f-4c81-a918-78814a48a49f-console-oauth-config\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.092780 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-serving-cert\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.092841 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e90fae7-1eff-4924-ba9c-a1325c4099e9-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.092833 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2a054ba-6a30-47c2-b042-8e859282af9c-etcd-client\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.092898 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0eab168-419a-4cb1-b318-244a89a1af5e-serving-cert\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.093047 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854f71d2-1dd7-45d6-b368-e879d3a14f59-serving-cert\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.094756 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26503754-c774-4a77-8b46-f1bd96f096b4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bvv8g\" (UID: \"26503754-c774-4a77-8b46-f1bd96f096b4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.095365 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6da6d66-adc4-4cd5-968f-21877a7820f0-serving-cert\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.095497 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ec1fbc1-dd55-49ef-b374-28698de88e40-etcd-client\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.096727 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.096878 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.102049 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-gdjh7"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.102055 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.102146 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.105841 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.105936 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106035 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5k7p6"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106091 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zsrsd"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106109 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-cjjqx"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106122 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106131 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106140 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106149 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hdk9b"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106158 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-np6kd"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106167 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106178 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-87mjz"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106189 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106200 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-l5t96"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106211 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106221 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106232 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-ztlbz"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106241 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106253 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fcvr7"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.106013 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-gdjh7"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.108771 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.108797 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-blzxz"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.108807 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.108818 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.108830 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gw5g9"]
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.108929 5108 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fcvr7" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.111591 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.111613 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-hrj8v"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.111622 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.111632 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.111641 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.111650 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-f6zss"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.112112 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114167 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114188 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114197 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114206 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114215 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gw5g9"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114223 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-k6vgk"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114231 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114239 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-gdjh7"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114249 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bn92"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.114323 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f6zss" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.116417 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c8mpt"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.116873 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.120013 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fcvr7"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.121136 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c8mpt" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.121544 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.121569 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c8mpt"] Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.122737 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.142400 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.161508 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186149 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.186323 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.686290245 +0000 UTC m=+107.594281404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186401 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae92c75f-face-43b3-8dd7-011d99508d20-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186447 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbdt\" (UniqueName: \"kubernetes.io/projected/c24a841d-7735-4fc3-b1b8-df1af8ae4328-kube-api-access-vjbdt\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186494 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186562 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-ca-trust-extracted\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186593 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-bound-sa-token\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k996g\" (UniqueName: \"kubernetes.io/projected/365b2743-bb33-4572-bd95-22945397200b-kube-api-access-k996g\") pod \"migrator-866fcbc849-kd9gm\" (UID: \"365b2743-bb33-4572-bd95-22945397200b\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186669 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186703 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24a841d-7735-4fc3-b1b8-df1af8ae4328-config\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186750 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186803 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a66581ea-fc96-4aba-9332-566bb17c7b71-available-featuregates\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: 
\"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186829 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s6s8q\" (UniqueName: \"kubernetes.io/projected/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-kube-api-access-s6s8q\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186857 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be473823-0596-45e5-a2fd-c6016d361d20-tmpfs\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186891 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66581ea-fc96-4aba-9332-566bb17c7b71-serving-cert\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: \"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186918 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-dir\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.186956 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.187043 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.187309 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-dir\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.187446 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.187501 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7766476-9154-440b-b5a3-cce6b6b7c7b4-serving-cert\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" Dec 12 
14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.188403 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.188422 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-ca-trust-extracted\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189037 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b4x8l\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-kube-api-access-b4x8l\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189053 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189106 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189167 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189195 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be473823-0596-45e5-a2fd-c6016d361d20-webhook-cert\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189241 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a66581ea-fc96-4aba-9332-566bb17c7b71-available-featuregates\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: \"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189360 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.189580 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190185 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190216 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-policies\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190255 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190310 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190330 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be473823-0596-45e5-a2fd-c6016d361d20-apiservice-cert\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190432 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-installation-pull-secrets\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190456 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvjfz\" (UniqueName: \"kubernetes.io/projected/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-kube-api-access-vvjfz\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190493 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190517 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190545 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae92c75f-face-43b3-8dd7-011d99508d20-config\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190572 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkv8\" (UniqueName: \"kubernetes.io/projected/ae92c75f-face-43b3-8dd7-011d99508d20-kube-api-access-9vkv8\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190604 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-tls\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190631 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190654 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-qtmhl\" (UniqueName: \"kubernetes.io/projected/d7766476-9154-440b-b5a3-cce6b6b7c7b4-kube-api-access-qtmhl\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190701 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-config\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkn5\" (UniqueName: \"kubernetes.io/projected/be473823-0596-45e5-a2fd-c6016d361d20-kube-api-access-6nkn5\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190800 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-certificates\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190907 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pm65j\" (UniqueName: \"kubernetes.io/projected/a66581ea-fc96-4aba-9332-566bb17c7b71-kube-api-access-pm65j\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: 
\"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.190962 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rltpf\" (UniqueName: \"kubernetes.io/projected/29c8dcea-f999-4e33-9f5e-ef9eb8a423f7-kube-api-access-rltpf\") pod \"downloads-747b44746d-hrj8v\" (UID: \"29c8dcea-f999-4e33-9f5e-ef9eb8a423f7\") " pod="openshift-console/downloads-747b44746d-hrj8v" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.191033 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.191109 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-trusted-ca\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.191132 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24a841d-7735-4fc3-b1b8-df1af8ae4328-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.191166 5108 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.191935 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.193579 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.194302 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.194646 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-policies\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.203023 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.702988523 +0000 UTC m=+107.610979692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.207690 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-certificates\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.208705 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.209424 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.210033 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.210495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.210671 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.210894 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66581ea-fc96-4aba-9332-566bb17c7b71-serving-cert\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: \"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.211074 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.211270 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.211321 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-tls\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.211516 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-installation-pull-secrets\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.220033 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-trusted-ca\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.223538 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.242409 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.262512 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.282318 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292315 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292543 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7766476-9154-440b-b5a3-cce6b6b7c7b4-serving-cert\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292586 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be473823-0596-45e5-a2fd-c6016d361d20-webhook-cert\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292646 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be473823-0596-45e5-a2fd-c6016d361d20-apiservice-cert\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae92c75f-face-43b3-8dd7-011d99508d20-config\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292738 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vkv8\" (UniqueName: \"kubernetes.io/projected/ae92c75f-face-43b3-8dd7-011d99508d20-kube-api-access-9vkv8\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292778 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292808 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtmhl\" (UniqueName: \"kubernetes.io/projected/d7766476-9154-440b-b5a3-cce6b6b7c7b4-kube-api-access-qtmhl\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292852 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-config\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.292888 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkn5\" (UniqueName: \"kubernetes.io/projected/be473823-0596-45e5-a2fd-c6016d361d20-kube-api-access-6nkn5\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.293212 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.793168844 +0000 UTC m=+107.701160013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293511 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae92c75f-face-43b3-8dd7-011d99508d20-config\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293572 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24a841d-7735-4fc3-b1b8-df1af8ae4328-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293610 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae92c75f-face-43b3-8dd7-011d99508d20-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293630 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vjbdt\" (UniqueName: \"kubernetes.io/projected/c24a841d-7735-4fc3-b1b8-df1af8ae4328-kube-api-access-vjbdt\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293673 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k996g\" (UniqueName: \"kubernetes.io/projected/365b2743-bb33-4572-bd95-22945397200b-kube-api-access-k996g\") pod \"migrator-866fcbc849-kd9gm\" (UID: \"365b2743-bb33-4572-bd95-22945397200b\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293708 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24a841d-7735-4fc3-b1b8-df1af8ae4328-config\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293726 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.293939 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be473823-0596-45e5-a2fd-c6016d361d20-tmpfs\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.294616 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be473823-0596-45e5-a2fd-c6016d361d20-tmpfs\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.294827 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.295038 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-config\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.295434 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7766476-9154-440b-b5a3-cce6b6b7c7b4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.298279 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7766476-9154-440b-b5a3-cce6b6b7c7b4-serving-cert\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.298716 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae92c75f-face-43b3-8dd7-011d99508d20-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.302505 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.322838 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.342451 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.361945 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.382159 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.395057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.395397 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.895378966 +0000 UTC m=+107.803370125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.402043 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.422729 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.442746 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.463184 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.483338 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.496650 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.496896 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.996844049 +0000 UTC m=+107.904835208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.497552 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.498744 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.998731 +0000 UTC m=+107.906722159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.503115 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.523199 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.543340 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.563070 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.582572 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.598826 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.599285 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.099259507 +0000 UTC m=+108.007250666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.599347 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.599812 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.099804342 +0000 UTC m=+108.007795501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.602180 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.622426 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.643455 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.663012 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.682100 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.700764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.700956 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.200933636 +0000 UTC m=+108.108924795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.701152 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.701419 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.201412168 +0000 UTC m=+108.109403327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.703269 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.723689 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.744588 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.763038 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.782590 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.802351 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.802584 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.302558323 +0000 UTC m=+108.210549482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.802872 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.803241 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.303224341 +0000 UTC m=+108.211215500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.803452 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.823375 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.843281 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.863263 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.882949 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.900217 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" event={"ID":"13f26732-b6d4-41aa-bde4-988f7178ea70","Type":"ContainerStarted","Data":"cf6a559db2b5cd72521369045c41163f2ff4f9143c4965363b79cfd83700ce31"}
Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.902860 5108 reflector.go:430] "Caches populated" type="*v1.Secret"
reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.903335 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.903570 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.403544433 +0000 UTC m=+108.311535602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.903979 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:34 crc kubenswrapper[5108]: E1212 14:12:34.904251 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.404242791 +0000 UTC m=+108.312233950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.922374 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.929717 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24a841d-7735-4fc3-b1b8-df1af8ae4328-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.942706 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.963144 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.966660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c24a841d-7735-4fc3-b1b8-df1af8ae4328-config\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:34 crc kubenswrapper[5108]: I1212 14:12:34.983437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.004100 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.004836 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.005021 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.504988575 +0000 UTC m=+108.412979784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.005437 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.005873 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.505845528 +0000 UTC m=+108.413836697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.022538 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.040575 5108 request.go:752] "Waited before sending request" delay="1.00318359s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.043267 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.063284 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.082283 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.088568 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be473823-0596-45e5-a2fd-c6016d361d20-apiservice-cert\") pod 
\"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.088806 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be473823-0596-45e5-a2fd-c6016d361d20-webhook-cert\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.102771 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.107323 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.107455 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.607432744 +0000 UTC m=+108.515423903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.107720 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.108032 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.60802567 +0000 UTC m=+108.516016829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.123381 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.150871 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.162025 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.182699 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.202921 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.208993 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209167 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.709143734 +0000 UTC m=+108.617134893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.209434 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.209477 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.209543 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209597 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209629 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209646 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209680 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209716 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:51.209709368 +0000 UTC m=+124.117700527 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.209601 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209748 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:51.209725659 +0000 UTC m=+124.117716848 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209769 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.209799 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209901 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:51.209868202 +0000 UTC m=+124.117859391 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209938 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209956 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.209970 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.210014 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:51.210001467 +0000 UTC m=+124.117992656 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.210118 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.710064939 +0000 UTC m=+108.618056098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.223123 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.242235 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.271635 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.282921 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.302275 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.310857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.310942 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.810922925 +0000 UTC m=+108.718914074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.311297 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.311624 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.811612213 +0000 UTC m=+108.719603372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.322915 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.342069 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.362757 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.383798 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.402840 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.407340 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.412280 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.412461 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.413311 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.414438 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.914401331 +0000 UTC m=+108.822392540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.419229 5108 scope.go:117] "RemoveContainer" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.419663 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.449373 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vgg\" (UniqueName: \"kubernetes.io/projected/854f71d2-1dd7-45d6-b368-e879d3a14f59-kube-api-access-n2vgg\") pod \"console-operator-67c89758df-cjjqx\" (UID: \"854f71d2-1dd7-45d6-b368-e879d3a14f59\") " pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.464745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw9gl\" (UniqueName: \"kubernetes.io/projected/b6da6d66-adc4-4cd5-968f-21877a7820f0-kube-api-access-rw9gl\") pod \"controller-manager-65b6cccf98-tx2lf\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.482517 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq8pl\" (UniqueName: \"kubernetes.io/projected/26503754-c774-4a77-8b46-f1bd96f096b4-kube-api-access-gq8pl\") pod \"cluster-samples-operator-6b564684c8-bvv8g\" (UID: \"26503754-c774-4a77-8b46-f1bd96f096b4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.507879 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxj4\" (UniqueName: \"kubernetes.io/projected/4149b83c-6a14-4f2f-b097-e59fcb47b122-kube-api-access-xpxj4\") pod \"machine-approver-54c688565-qwk4x\" (UID: \"4149b83c-6a14-4f2f-b097-e59fcb47b122\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.512423 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.515669 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.515814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.516217 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.016200213 +0000 UTC m=+108.924191372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.529253 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkknk\" (UniqueName: \"kubernetes.io/projected/8e90fae7-1eff-4924-ba9c-a1325c4099e9-kube-api-access-rkknk\") pod \"machine-api-operator-755bb95488-ztlbz\" (UID: \"8e90fae7-1eff-4924-ba9c-a1325c4099e9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.542950 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f865\" (UniqueName: \"kubernetes.io/projected/a0eab168-419a-4cb1-b318-244a89a1af5e-kube-api-access-7f865\") pod \"route-controller-manager-776cdc94d6-wspns\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.555293 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.560439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmwh7\" (UniqueName: \"kubernetes.io/projected/b2a054ba-6a30-47c2-b042-8e859282af9c-kube-api-access-vmwh7\") pod \"apiserver-8596bd845d-w9pnf\" (UID: \"b2a054ba-6a30-47c2-b042-8e859282af9c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.581109 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pzf\" (UniqueName: \"kubernetes.io/projected/d2d38bed-cc7f-4c81-a918-78814a48a49f-kube-api-access-86pzf\") pod \"console-64d44f6ddf-np6kd\" (UID: \"d2d38bed-cc7f-4c81-a918-78814a48a49f\") " pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.582531 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.603095 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.616813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.617092 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.117046349 +0000 UTC m=+109.025037518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.617390 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.617789 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.117771389 +0000 UTC m=+109.025762548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.622879 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.644103 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.662594 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.686467 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.699353 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g"]
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.702487 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.719062 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.719234 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.219206121 +0000 UTC m=+109.127197280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.719659 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.721006 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.220977568 +0000 UTC m=+109.128968727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.722899 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.730915 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-cjjqx"]
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.741968 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.742462 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.770012 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.788261 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x76zx\" (UniqueName: \"kubernetes.io/projected/4ec1fbc1-dd55-49ef-b374-28698de88e40-kube-api-access-x76zx\") pod \"apiserver-9ddfb9f55-5k7p6\" (UID: \"4ec1fbc1-dd55-49ef-b374-28698de88e40\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.788892 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.794105 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.801681 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.804813 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.820535 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.820909 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.32089019 +0000 UTC m=+109.228881349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.821037 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.823113 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.847091 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.862058 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.882445 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.891618 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"]
Dec 12 14:12:35 crc kubenswrapper[5108]: W1212 14:12:35.900229 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6da6d66_adc4_4cd5_968f_21877a7820f0.slice/crio-3ce1c24ea79c387fdd62c92220e54975a547982573e90c38b74079d8617351b5 WatchSource:0}: Error finding container 3ce1c24ea79c387fdd62c92220e54975a547982573e90c38b74079d8617351b5: Status 404 returned error can't find the container with id 3ce1c24ea79c387fdd62c92220e54975a547982573e90c38b74079d8617351b5
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.902879 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.904597 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-cjjqx" event={"ID":"854f71d2-1dd7-45d6-b368-e879d3a14f59","Type":"ContainerStarted","Data":"3d87a02dcbf4fdd61ae9890f485106a10e43bbc9c213eef5af58b56397ec4311"}
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.922247 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.922446 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 12 14:12:35 crc kubenswrapper[5108]: E1212 14:12:35.922718 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.422697332 +0000 UTC m=+109.330688491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.943327 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.962697 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Dec 12 14:12:35 crc kubenswrapper[5108]: I1212 14:12:35.983020 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.002790 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 12 14:12:36 crc kubenswrapper[5108]: W1212 14:12:36.015723 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4149b83c_6a14_4f2f_b097_e59fcb47b122.slice/crio-0880d30e799cd23c08ac53bd308e56e2c1008c033c38ad4cebd3232211b3ccd3 WatchSource:0}: Error finding container 0880d30e799cd23c08ac53bd308e56e2c1008c033c38ad4cebd3232211b3ccd3: Status 404 returned error can't find the container with id 0880d30e799cd23c08ac53bd308e56e2c1008c033c38ad4cebd3232211b3ccd3
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.023209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.023476 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.523460405 +0000 UTC m=+109.431451554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.024732 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.041097 5108 request.go:752] "Waited before sending request" delay="1.931857561s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-9pgs7&limit=500&resourceVersion=0"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.045409 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.058138 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.065479 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.082299 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.105862 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.122961 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.124764 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.125267 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.625247757 +0000 UTC m=+109.533238916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.144294 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.163245 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.182782 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.208210 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.223016 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.226754 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.227121 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.72710486 +0000 UTC m=+109.635096009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.244347 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.266066 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.282035 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.325283 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6s8q\" (UniqueName: \"kubernetes.io/projected/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-kube-api-access-s6s8q\") pod \"oauth-openshift-66458b6674-blzxz\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.328327 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.328622 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.828609254 +0000 UTC m=+109.736600413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.347690 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-bound-sa-token\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.359265 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.402160 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4x8l\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-kube-api-access-b4x8l\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.418882 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltpf\" (UniqueName: \"kubernetes.io/projected/29c8dcea-f999-4e33-9f5e-ef9eb8a423f7-kube-api-access-rltpf\") pod \"downloads-747b44746d-hrj8v\" (UID: \"29c8dcea-f999-4e33-9f5e-ef9eb8a423f7\") " pod="openshift-console/downloads-747b44746d-hrj8v"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.418961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvjfz\" (UniqueName: \"kubernetes.io/projected/36f48e3e-03a0-42fc-ab1d-37c77fb10f65-kube-api-access-vvjfz\") pod \"ingress-operator-6b9cb4dbcf-dzxhh\" (UID: \"36f48e3e-03a0-42fc-ab1d-37c77fb10f65\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.429270 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.429613 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.929591164 +0000 UTC m=+109.837582323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.442971 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm65j\" (UniqueName: \"kubernetes.io/projected/a66581ea-fc96-4aba-9332-566bb17c7b71-kube-api-access-pm65j\") pod \"openshift-config-operator-5777786469-zsrsd\" (UID: \"a66581ea-fc96-4aba-9332-566bb17c7b71\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.445368 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.465392 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-hrj8v"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.477762 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkn5\" (UniqueName: \"kubernetes.io/projected/be473823-0596-45e5-a2fd-c6016d361d20-kube-api-access-6nkn5\") pod \"packageserver-7d4fc7d867-msx9x\" (UID: \"be473823-0596-45e5-a2fd-c6016d361d20\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.478246 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.484061 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5k7p6"]
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.505741 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtmhl\" (UniqueName: \"kubernetes.io/projected/d7766476-9154-440b-b5a3-cce6b6b7c7b4-kube-api-access-qtmhl\") pod \"authentication-operator-7f5c659b84-fmnpk\" (UID: \"d7766476-9154-440b-b5a3-cce6b6b7c7b4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.516287 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.516390 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs podName:d8c95a75-0c3b-4caa-9b09-30c6dca73e72 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:52.516370583 +0000 UTC m=+125.424361742 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs") pod "network-metrics-daemon-p4g92" (UID: "d8c95a75-0c3b-4caa-9b09-30c6dca73e72") : failed to sync secret cache: timed out waiting for the condition
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.521991 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"]
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.527098 5108 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.530690 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-np6kd"] Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.530777 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.531138 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.031120988 +0000 UTC m=+109.939112147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.533056 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-ztlbz"] Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.535340 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vkv8\" (UniqueName: \"kubernetes.io/projected/ae92c75f-face-43b3-8dd7-011d99508d20-kube-api-access-9vkv8\") pod \"openshift-apiserver-operator-846cbfc458-wz7d2\" (UID: \"ae92c75f-face-43b3-8dd7-011d99508d20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.536665 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.542250 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjbdt\" (UniqueName: \"kubernetes.io/projected/c24a841d-7735-4fc3-b1b8-df1af8ae4328-kube-api-access-vjbdt\") pod \"kube-storage-version-migrator-operator-565b79b866-9dtpr\" (UID: \"c24a841d-7735-4fc3-b1b8-df1af8ae4328\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.550290 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.569161 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.570737 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k996g\" (UniqueName: \"kubernetes.io/projected/365b2743-bb33-4572-bd95-22945397200b-kube-api-access-k996g\") pod \"migrator-866fcbc849-kd9gm\" (UID: \"365b2743-bb33-4572-bd95-22945397200b\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.570850 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf"] Dec 12 14:12:36 crc kubenswrapper[5108]: W1212 14:12:36.578180 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2d38bed_cc7f_4c81_a918_78814a48a49f.slice/crio-4a5591b3302c682102cb6e91bfd82cd9a49e87b3e01d9bf861588431972ca985 WatchSource:0}: Error finding container 4a5591b3302c682102cb6e91bfd82cd9a49e87b3e01d9bf861588431972ca985: Status 404 returned error can't find the container with id 4a5591b3302c682102cb6e91bfd82cd9a49e87b3e01d9bf861588431972ca985 Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.583157 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 14:12:36 crc kubenswrapper[5108]: W1212 14:12:36.604467 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a054ba_6a30_47c2_b042_8e859282af9c.slice/crio-01aecddecc567d4aa129c917eec78eea9bff701b9ff084bb81cecd9f5f62f3b5 WatchSource:0}: Error finding 
container 01aecddecc567d4aa129c917eec78eea9bff701b9ff084bb81cecd9f5f62f3b5: Status 404 returned error can't find the container with id 01aecddecc567d4aa129c917eec78eea9bff701b9ff084bb81cecd9f5f62f3b5 Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.604730 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.623622 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.632009 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.632608 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.132501729 +0000 UTC m=+110.040492898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.644985 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.661667 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.668373 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.675547 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.685963 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.734923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7426742-3adb-48e6-be7c-375e4860babe-metrics-tls\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737413 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4zc2\" (UniqueName: \"kubernetes.io/projected/f7426742-3adb-48e6-be7c-375e4860babe-kube-api-access-f4zc2\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737473 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8da381c-e396-48dd-a445-40d51d56858d-config-volume\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81011d2a-11c4-491a-beeb-7af27e94f011-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737521 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-ca\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737555 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8da381c-e396-48dd-a445-40d51d56858d-secret-volume\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737584 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bcda332-2469-4e96-911f-6091f5ab33ee-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-77nhq\" (UID: \"5bcda332-2469-4e96-911f-6091f5ab33ee\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlpcg\" (UniqueName: \"kubernetes.io/projected/02ff303a-02e9-48e1-a9c5-919ddbc4988a-kube-api-access-nlpcg\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737632 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737647 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7426742-3adb-48e6-be7c-375e4860babe-tmp-dir\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737662 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7722b55-5f8c-4480-8ad6-af633ecad9d2-serving-cert\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737675 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-srv-cert\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737696 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1d808ae-6ef1-4996-8f03-fa3102c0d165-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737714 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81011d2a-11c4-491a-beeb-7af27e94f011-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737786 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-config\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737801 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7722b55-5f8c-4480-8ad6-af633ecad9d2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737822 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/81011d2a-11c4-491a-beeb-7af27e94f011-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737839 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-stats-auth\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737855 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737871 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737905 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2431091b-9e2c-4eb0-993a-a6893cb79df1-tmp-dir\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737922 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-metrics-certs\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: 
\"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737949 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.737989 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/378c9b4b-6598-489c-9af4-b776c79341f6-tmp\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.738012 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d68d2a-4326-4f35-b609-3d29fc70a888-config\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.738026 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1d808ae-6ef1-4996-8f03-fa3102c0d165-images\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.738042 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qqbnh\" (UniqueName: \"kubernetes.io/projected/c1d808ae-6ef1-4996-8f03-fa3102c0d165-kube-api-access-qqbnh\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.738069 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-client\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.738131 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-tmpfs\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.738147 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7722b55-5f8c-4480-8ad6-af633ecad9d2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739484 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjnn\" (UniqueName: \"kubernetes.io/projected/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-kube-api-access-9vjnn\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " 
pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739510 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739525 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-profile-collector-cert\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739540 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d68d2a-4326-4f35-b609-3d29fc70a888-serving-cert\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739558 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqzmp\" (UniqueName: \"kubernetes.io/projected/e91b78a2-592a-42d0-af84-eb81d184bfd7-kube-api-access-wqzmp\") pod \"package-server-manager-77f986bd66-bjwlp\" (UID: \"e91b78a2-592a-42d0-af84-eb81d184bfd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739588 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1d808ae-6ef1-4996-8f03-fa3102c0d165-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739608 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7722b55-5f8c-4480-8ad6-af633ecad9d2-config\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739659 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739674 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/02ff303a-02e9-48e1-a9c5-919ddbc4988a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739690 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/02ff303a-02e9-48e1-a9c5-919ddbc4988a-mcc-auth-proxy-config\") pod 
\"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739705 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739747 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739764 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl87b\" (UniqueName: \"kubernetes.io/projected/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-kube-api-access-zl87b\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739795 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-default-certificate\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739810 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6dmw\" (UniqueName: \"kubernetes.io/projected/72d68d2a-4326-4f35-b609-3d29fc70a888-kube-api-access-l6dmw\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739840 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-service-ca-bundle\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739921 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44px\" (UniqueName: \"kubernetes.io/projected/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-kube-api-access-z44px\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739937 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91b78a2-592a-42d0-af84-eb81d184bfd7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-bjwlp\" (UID: \"e91b78a2-592a-42d0-af84-eb81d184bfd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739961 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.739983 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf226324-c47f-4cdc-9d31-85a1295236a5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740002 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c45ee4a1-ebdf-4a47-853b-9cae5ac88246-webhook-certs\") pod \"multus-admission-controller-69db94689b-k6vgk\" (UID: \"c45ee4a1-ebdf-4a47-853b-9cae5ac88246\") " pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740018 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nl9v\" (UniqueName: \"kubernetes.io/projected/378c9b4b-6598-489c-9af4-b776c79341f6-kube-api-access-7nl9v\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740064 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78jg\" (UniqueName: \"kubernetes.io/projected/2431091b-9e2c-4eb0-993a-a6893cb79df1-kube-api-access-v78jg\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740096 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd8ks\" (UniqueName: \"kubernetes.io/projected/c45ee4a1-ebdf-4a47-853b-9cae5ac88246-kube-api-access-hd8ks\") pod \"multus-admission-controller-69db94689b-k6vgk\" (UID: \"c45ee4a1-ebdf-4a47-853b-9cae5ac88246\") " pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740127 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2431091b-9e2c-4eb0-993a-a6893cb79df1-serving-cert\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740146 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf226324-c47f-4cdc-9d31-85a1295236a5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740189 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntlm\" (UniqueName: \"kubernetes.io/projected/5bcda332-2469-4e96-911f-6091f5ab33ee-kube-api-access-vntlm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-77nhq\" (UID: \"5bcda332-2469-4e96-911f-6091f5ab33ee\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81011d2a-11c4-491a-beeb-7af27e94f011-config\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740221 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf226324-c47f-4cdc-9d31-85a1295236a5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740255 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf226324-c47f-4cdc-9d31-85a1295236a5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.740397 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdwf2\" (UniqueName: \"kubernetes.io/projected/e8da381c-e396-48dd-a445-40d51d56858d-kube-api-access-zdwf2\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb"
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.741661 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.241646757 +0000 UTC m=+110.149637916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.747627 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zsrsd"]
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.824731 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-hrj8v"]
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841396 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841540 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8da381c-e396-48dd-a445-40d51d56858d-config-volume\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841565 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0222cddf-6d62-4ba5-924b-f1f64d892c84-tmp-dir\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841593 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81011d2a-11c4-491a-beeb-7af27e94f011-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-ca\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841639 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8da381c-e396-48dd-a445-40d51d56858d-secret-volume\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841656 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bcda332-2469-4e96-911f-6091f5ab33ee-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-77nhq\" (UID: \"5bcda332-2469-4e96-911f-6091f5ab33ee\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841674 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nlpcg\" (UniqueName: \"kubernetes.io/projected/02ff303a-02e9-48e1-a9c5-919ddbc4988a-kube-api-access-nlpcg\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.841720 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.341683102 +0000 UTC m=+110.249674261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0222cddf-6d62-4ba5-924b-f1f64d892c84-config-volume\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841855 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7426742-3adb-48e6-be7c-375e4860babe-tmp-dir\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7722b55-5f8c-4480-8ad6-af633ecad9d2-serving-cert\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841949 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-srv-cert\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.841982 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-plugins-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842008 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1d808ae-6ef1-4996-8f03-fa3102c0d165-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81011d2a-11c4-491a-beeb-7af27e94f011-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842422 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g7pc\" (UniqueName: \"kubernetes.io/projected/0222cddf-6d62-4ba5-924b-f1f64d892c84-kube-api-access-2g7pc\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842489 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-config\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842513 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7722b55-5f8c-4480-8ad6-af633ecad9d2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842538 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlhs\" (UniqueName: \"kubernetes.io/projected/e002b2da-aaa6-4bf5-93a0-0d08a9467038-kube-api-access-vdlhs\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842573 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-registration-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842622 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/81011d2a-11c4-491a-beeb-7af27e94f011-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842650 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-stats-auth\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842675 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842704 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842729 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xlmf\" (UniqueName: \"kubernetes.io/projected/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-kube-api-access-8xlmf\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842758 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-node-bootstrap-token\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842787 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2431091b-9e2c-4eb0-993a-a6893cb79df1-tmp-dir\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842810 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-metrics-certs\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842840 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842905 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/378c9b4b-6598-489c-9af4-b776c79341f6-tmp\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842929 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d68d2a-4326-4f35-b609-3d29fc70a888-config\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842951 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1d808ae-6ef1-4996-8f03-fa3102c0d165-images\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.842976 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbnh\" (UniqueName: \"kubernetes.io/projected/c1d808ae-6ef1-4996-8f03-fa3102c0d165-kube-api-access-qqbnh\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.843001 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13a39891-aaca-425b-adee-d73d8c5f7bcd-tmpfs\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.843028 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvfzh\" (UniqueName: \"kubernetes.io/projected/fab2bff9-a63d-4213-b55b-c19d14831aa5-kube-api-access-xvfzh\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.843052 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.843282 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1d808ae-6ef1-4996-8f03-fa3102c0d165-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.844219 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-config\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.844837 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/81011d2a-11c4-491a-beeb-7af27e94f011-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845429 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-client\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845472 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-tmpfs\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845497 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7722b55-5f8c-4480-8ad6-af633ecad9d2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845539 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjnn\" (UniqueName: \"kubernetes.io/projected/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-kube-api-access-9vjnn\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845567 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845588 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-profile-collector-cert\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845609 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d68d2a-4326-4f35-b609-3d29fc70a888-serving-cert\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845634 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqzmp\" (UniqueName: \"kubernetes.io/projected/e91b78a2-592a-42d0-af84-eb81d184bfd7-kube-api-access-wqzmp\") pod \"package-server-manager-77f986bd66-bjwlp\" (UID: \"e91b78a2-592a-42d0-af84-eb81d184bfd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845660 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1d808ae-6ef1-4996-8f03-fa3102c0d165-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845684 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fab2bff9-a63d-4213-b55b-c19d14831aa5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845715 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7722b55-5f8c-4480-8ad6-af633ecad9d2-config\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845735 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa7e1be7-f229-433f-812a-b47e2c151895-cert\") pod \"ingress-canary-fcvr7\" (UID: \"aa7e1be7-f229-433f-812a-b47e2c151895\") " pod="openshift-ingress-canary/ingress-canary-fcvr7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845774 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845796 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/02ff303a-02e9-48e1-a9c5-919ddbc4988a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845820 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/02ff303a-02e9-48e1-a9c5-919ddbc4988a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845843 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845866 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-certs\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845961 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.845989 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zl87b\" (UniqueName: \"kubernetes.io/projected/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-kube-api-access-zl87b\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846023 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-default-certificate\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846045 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6dmw\" (UniqueName: \"kubernetes.io/projected/72d68d2a-4326-4f35-b609-3d29fc70a888-kube-api-access-l6dmw\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-csi-data-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846155 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdhkn\" (UniqueName: \"kubernetes.io/projected/aa7e1be7-f229-433f-812a-b47e2c151895-kube-api-access-vdhkn\") pod \"ingress-canary-fcvr7\" (UID: \"aa7e1be7-f229-433f-812a-b47e2c151895\") " pod="openshift-ingress-canary/ingress-canary-fcvr7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846214 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-service-ca-bundle\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846239 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13a39891-aaca-425b-adee-d73d8c5f7bcd-srv-cert\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbbfq\" (UniqueName: \"kubernetes.io/projected/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-kube-api-access-nbbfq\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846354 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z44px\" (UniqueName: \"kubernetes.io/projected/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-kube-api-access-z44px\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846378 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fab2bff9-a63d-4213-b55b-c19d14831aa5-ready\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846401 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-mountpoint-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846422 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-config\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") "
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846446 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91b78a2-592a-42d0-af84-eb81d184bfd7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-bjwlp\" (UID: \"e91b78a2-592a-42d0-af84-eb81d184bfd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846480 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846513 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf226324-c47f-4cdc-9d31-85a1295236a5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846537 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e002b2da-aaa6-4bf5-93a0-0d08a9467038-signing-key\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846566 5108 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c45ee4a1-ebdf-4a47-853b-9cae5ac88246-webhook-certs\") pod \"multus-admission-controller-69db94689b-k6vgk\" (UID: \"c45ee4a1-ebdf-4a47-853b-9cae5ac88246\") " pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846600 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7nl9v\" (UniqueName: \"kubernetes.io/projected/378c9b4b-6598-489c-9af4-b776c79341f6-kube-api-access-7nl9v\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846629 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846683 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v78jg\" (UniqueName: \"kubernetes.io/projected/2431091b-9e2c-4eb0-993a-a6893cb79df1-kube-api-access-v78jg\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846707 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hd8ks\" (UniqueName: \"kubernetes.io/projected/c45ee4a1-ebdf-4a47-853b-9cae5ac88246-kube-api-access-hd8ks\") pod \"multus-admission-controller-69db94689b-k6vgk\" (UID: 
\"c45ee4a1-ebdf-4a47-853b-9cae5ac88246\") " pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846730 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grzfm\" (UniqueName: \"kubernetes.io/projected/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-kube-api-access-grzfm\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846777 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2431091b-9e2c-4eb0-993a-a6893cb79df1-serving-cert\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846804 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxwn\" (UniqueName: \"kubernetes.io/projected/13a39891-aaca-425b-adee-d73d8c5f7bcd-kube-api-access-jpxwn\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846833 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf226324-c47f-4cdc-9d31-85a1295236a5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846855 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0222cddf-6d62-4ba5-924b-f1f64d892c84-metrics-tls\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846878 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vntlm\" (UniqueName: \"kubernetes.io/projected/5bcda332-2469-4e96-911f-6091f5ab33ee-kube-api-access-vntlm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-77nhq\" (UID: \"5bcda332-2469-4e96-911f-6091f5ab33ee\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846901 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81011d2a-11c4-491a-beeb-7af27e94f011-config\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf226324-c47f-4cdc-9d31-85a1295236a5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846948 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e002b2da-aaa6-4bf5-93a0-0d08a9467038-signing-cabundle\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " 
pod="openshift-service-ca/service-ca-74545575db-gdjh7" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.846986 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf226324-c47f-4cdc-9d31-85a1295236a5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.847012 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdwf2\" (UniqueName: \"kubernetes.io/projected/e8da381c-e396-48dd-a445-40d51d56858d-kube-api-access-zdwf2\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.847037 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fab2bff9-a63d-4213-b55b-c19d14831aa5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.847101 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7426742-3adb-48e6-be7c-375e4860babe-metrics-tls\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.847127 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-socket-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.847153 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13a39891-aaca-425b-adee-d73d8c5f7bcd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.847207 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4zc2\" (UniqueName: \"kubernetes.io/projected/f7426742-3adb-48e6-be7c-375e4860babe-kube-api-access-f4zc2\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.847688 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2431091b-9e2c-4eb0-993a-a6893cb79df1-tmp-dir\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.848030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.848928 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.849367 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/378c9b4b-6598-489c-9af4-b776c79341f6-tmp\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.849799 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d68d2a-4326-4f35-b609-3d29fc70a888-config\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.850225 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.853298 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1d808ae-6ef1-4996-8f03-fa3102c0d165-images\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" Dec 12 14:12:36 crc 
kubenswrapper[5108]: I1212 14:12:36.854671 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7722b55-5f8c-4480-8ad6-af633ecad9d2-config\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.855852 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.860784 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-ca\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.861036 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-tmpfs\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.861177 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7722b55-5f8c-4480-8ad6-af633ecad9d2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 
14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.861495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8da381c-e396-48dd-a445-40d51d56858d-config-volume\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.861897 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf226324-c47f-4cdc-9d31-85a1295236a5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.861941 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.864639 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-service-ca-bundle\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.866193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/02ff303a-02e9-48e1-a9c5-919ddbc4988a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: 
\"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.866894 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-srv-cert\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.872514 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf226324-c47f-4cdc-9d31-85a1295236a5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.873155 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bcda332-2469-4e96-911f-6091f5ab33ee-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-77nhq\" (UID: \"5bcda332-2469-4e96-911f-6091f5ab33ee\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.873566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7426742-3adb-48e6-be7c-375e4860babe-tmp-dir\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.873904 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/02ff303a-02e9-48e1-a9c5-919ddbc4988a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.874944 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.374919025 +0000 UTC m=+110.282910184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.875469 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81011d2a-11c4-491a-beeb-7af27e94f011-config\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.875738 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-metrics-certs\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.899219 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8da381c-e396-48dd-a445-40d51d56858d-secret-volume\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.902699 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.913605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7426742-3adb-48e6-be7c-375e4860babe-metrics-tls\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.914040 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81011d2a-11c4-491a-beeb-7af27e94f011-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.914262 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2431091b-9e2c-4eb0-993a-a6893cb79df1-etcd-client\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:36 crc 
kubenswrapper[5108]: I1212 14:12:36.916528 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7722b55-5f8c-4480-8ad6-af633ecad9d2-serving-cert\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.917959 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c45ee4a1-ebdf-4a47-853b-9cae5ac88246-webhook-certs\") pod \"multus-admission-controller-69db94689b-k6vgk\" (UID: \"c45ee4a1-ebdf-4a47-853b-9cae5ac88246\") " pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.919163 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91b78a2-592a-42d0-af84-eb81d184bfd7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-bjwlp\" (UID: \"e91b78a2-592a-42d0-af84-eb81d184bfd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.919712 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" event={"ID":"4149b83c-6a14-4f2f-b097-e59fcb47b122","Type":"ContainerStarted","Data":"788859f3ea765bf3864febaacdef84b9cac3c08be3ee7ca17db6a5e53f74aa36"} Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.919757 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" event={"ID":"4149b83c-6a14-4f2f-b097-e59fcb47b122","Type":"ContainerStarted","Data":"0880d30e799cd23c08ac53bd308e56e2c1008c033c38ad4cebd3232211b3ccd3"} Dec 12 14:12:36 crc 
kubenswrapper[5108]: I1212 14:12:36.923680 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-stats-auth\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.923696 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-default-certificate\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.925969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz" event={"ID":"8e90fae7-1eff-4924-ba9c-a1325c4099e9","Type":"ContainerStarted","Data":"fbad4c25e68a0cf4dbd0b91776bcf12b8bb119e393f1db4aaf98330bf4d21a6d"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.926025 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz" event={"ID":"8e90fae7-1eff-4924-ba9c-a1325c4099e9","Type":"ContainerStarted","Data":"794ee4c413626de25e65c6b9b83fa2305cab5aa10871ec96a70742e169691341"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.925987 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.927599 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf226324-c47f-4cdc-9d31-85a1295236a5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.928272 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-profile-collector-cert\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.930689 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d68d2a-4326-4f35-b609-3d29fc70a888-serving-cert\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.931186 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81011d2a-11c4-491a-beeb-7af27e94f011-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-hzphx\" (UID: \"81011d2a-11c4-491a-beeb-7af27e94f011\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.933864 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2431091b-9e2c-4eb0-993a-a6893cb79df1-serving-cert\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.934013 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlpcg\" (UniqueName: \"kubernetes.io/projected/02ff303a-02e9-48e1-a9c5-919ddbc4988a-kube-api-access-nlpcg\") pod \"machine-config-controller-f9cdd68f7-8gpql\" (UID: \"02ff303a-02e9-48e1-a9c5-919ddbc4988a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.934591 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7722b55-5f8c-4480-8ad6-af633ecad9d2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-6dfcz\" (UID: \"f7722b55-5f8c-4480-8ad6-af633ecad9d2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.935883 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4zc2\" (UniqueName: \"kubernetes.io/projected/f7426742-3adb-48e6-be7c-375e4860babe-kube-api-access-f4zc2\") pod \"dns-operator-799b87ffcd-87mjz\" (UID: \"f7426742-3adb-48e6-be7c-375e4860babe\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.940932 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.944900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1d808ae-6ef1-4996-8f03-fa3102c0d165-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.948646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.948823 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-socket-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.948850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13a39891-aaca-425b-adee-d73d8c5f7bcd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.948897 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0222cddf-6d62-4ba5-924b-f1f64d892c84-tmp-dir\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.948932 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0222cddf-6d62-4ba5-924b-f1f64d892c84-config-volume\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.948957 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-plugins-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.948980 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2g7pc\" (UniqueName: \"kubernetes.io/projected/0222cddf-6d62-4ba5-924b-f1f64d892c84-kube-api-access-2g7pc\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdlhs\" (UniqueName: \"kubernetes.io/projected/e002b2da-aaa6-4bf5-93a0-0d08a9467038-kube-api-access-vdlhs\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949033 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-registration-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlmf\" (UniqueName: \"kubernetes.io/projected/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-kube-api-access-8xlmf\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949094 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-node-bootstrap-token\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949132 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13a39891-aaca-425b-adee-d73d8c5f7bcd-tmpfs\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949153 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvfzh\" (UniqueName: \"kubernetes.io/projected/fab2bff9-a63d-4213-b55b-c19d14831aa5-kube-api-access-xvfzh\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949203 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fab2bff9-a63d-4213-b55b-c19d14831aa5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949225 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa7e1be7-f229-433f-812a-b47e2c151895-cert\") pod \"ingress-canary-fcvr7\" (UID: \"aa7e1be7-f229-433f-812a-b47e2c151895\") " pod="openshift-ingress-canary/ingress-canary-fcvr7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949249 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-certs\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949275 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-csi-data-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949291 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdhkn\" (UniqueName: \"kubernetes.io/projected/aa7e1be7-f229-433f-812a-b47e2c151895-kube-api-access-vdhkn\") pod \"ingress-canary-fcvr7\" (UID: \"aa7e1be7-f229-433f-812a-b47e2c151895\") " pod="openshift-ingress-canary/ingress-canary-fcvr7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13a39891-aaca-425b-adee-d73d8c5f7bcd-srv-cert\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949329 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nbbfq\" (UniqueName: \"kubernetes.io/projected/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-kube-api-access-nbbfq\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949363 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fab2bff9-a63d-4213-b55b-c19d14831aa5-ready\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949378 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-mountpoint-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949398 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-config\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949421 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e002b2da-aaa6-4bf5-93a0-0d08a9467038-signing-key\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949440 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grzfm\" (UniqueName: \"kubernetes.io/projected/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-kube-api-access-grzfm\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949499 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxwn\" (UniqueName: \"kubernetes.io/projected/13a39891-aaca-425b-adee-d73d8c5f7bcd-kube-api-access-jpxwn\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949518 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0222cddf-6d62-4ba5-924b-f1f64d892c84-metrics-tls\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949536 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e002b2da-aaa6-4bf5-93a0-0d08a9467038-signing-cabundle\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949561 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fab2bff9-a63d-4213-b55b-c19d14831aa5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.949992 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fab2bff9-a63d-4213-b55b-c19d14831aa5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: E1212 14:12:36.950221 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.450197534 +0000 UTC m=+110.358188693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.950448 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fab2bff9-a63d-4213-b55b-c19d14831aa5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.950642 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-socket-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.950696 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" event={"ID":"b6da6d66-adc4-4cd5-968f-21877a7820f0","Type":"ContainerStarted","Data":"aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.950757 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" event={"ID":"b6da6d66-adc4-4cd5-968f-21877a7820f0","Type":"ContainerStarted","Data":"3ce1c24ea79c387fdd62c92220e54975a547982573e90c38b74079d8617351b5"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.950854 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-registration-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.951923 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13a39891-aaca-425b-adee-d73d8c5f7bcd-tmpfs\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.952566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0222cddf-6d62-4ba5-924b-f1f64d892c84-config-volume\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.952839 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.953131 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0222cddf-6d62-4ba5-924b-f1f64d892c84-tmp-dir\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.953850 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-plugins-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.955488 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-config\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.954567 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.956164 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e002b2da-aaa6-4bf5-93a0-0d08a9467038-signing-cabundle\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.956266 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-csi-data-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.956588 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fab2bff9-a63d-4213-b55b-c19d14831aa5-ready\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.956880 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.954041 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-mountpoint-dir\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.957495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-node-bootstrap-token\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.958905 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.964394 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.964932 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.976932 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13a39891-aaca-425b-adee-d73d8c5f7bcd-srv-cert\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.978568 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.979195 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa7e1be7-f229-433f-812a-b47e2c151895-cert\") pod \"ingress-canary-fcvr7\" (UID: \"aa7e1be7-f229-433f-812a-b47e2c151895\") " pod="openshift-ingress-canary/ingress-canary-fcvr7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.979675 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-certs\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.980854 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-cjjqx" event={"ID":"854f71d2-1dd7-45d6-b368-e879d3a14f59","Type":"ContainerStarted","Data":"b381c7d8ca70431ce0cc712517f6f3d181363dec0dfe85ace22bb77fbf4074db"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.981033 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0222cddf-6d62-4ba5-924b-f1f64d892c84-metrics-tls\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.981198 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13a39891-aaca-425b-adee-d73d8c5f7bcd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.981635 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-cjjqx"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.981745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e002b2da-aaa6-4bf5-93a0-0d08a9467038-signing-key\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.984597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.985692 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" event={"ID":"4ec1fbc1-dd55-49ef-b374-28698de88e40","Type":"ContainerStarted","Data":"b820203457bc9bcb6374b7ac23e36443545cdaa4b7e9cb4bb5f96c8657beeee1"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.987293 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbnh\" (UniqueName: \"kubernetes.io/projected/c1d808ae-6ef1-4996-8f03-fa3102c0d165-kube-api-access-qqbnh\") pod \"machine-config-operator-67c9d58cbb-c7wpt\" (UID: \"c1d808ae-6ef1-4996-8f03-fa3102c0d165\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.987304 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" event={"ID":"b2a054ba-6a30-47c2-b042-8e859282af9c","Type":"ContainerStarted","Data":"01aecddecc567d4aa129c917eec78eea9bff701b9ff084bb81cecd9f5f62f3b5"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.987849 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-np6kd" event={"ID":"d2d38bed-cc7f-4c81-a918-78814a48a49f","Type":"ContainerStarted","Data":"4a5591b3302c682102cb6e91bfd82cd9a49e87b3e01d9bf861588431972ca985"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.990413 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" event={"ID":"a0eab168-419a-4cb1-b318-244a89a1af5e","Type":"ContainerStarted","Data":"1c737981c381cebd8104199914fd1a4b06af585584456fca65311e19325d03b5"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.990773 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.997084 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g" event={"ID":"26503754-c774-4a77-8b46-f1bd96f096b4","Type":"ContainerStarted","Data":"92c2f1d70fcea2063f1099a6c436cc06f8d6e9fd31d849203c1997cac2651c46"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.997114 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g" event={"ID":"26503754-c774-4a77-8b46-f1bd96f096b4","Type":"ContainerStarted","Data":"e0dcdf6b70fc28820a4af746d36cedf3586cdeb3fc944625fac95ff11a5c435f"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.997123 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g" event={"ID":"26503754-c774-4a77-8b46-f1bd96f096b4","Type":"ContainerStarted","Data":"3f97042ac5ad120005b38aa7e1fb693539a784e512a2db185893bc5d150f289e"}
Dec 12 14:12:36 crc kubenswrapper[5108]: I1212 14:12:36.998055 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" event={"ID":"a66581ea-fc96-4aba-9332-566bb17c7b71","Type":"ContainerStarted","Data":"4af4f10185949e0818e3aa33854b29171a936800476ebf7c519001881037789f"}
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.003883 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-wspns container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.003920 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" podUID="a0eab168-419a-4cb1-b318-244a89a1af5e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.035054 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v78jg\" (UniqueName: \"kubernetes.io/projected/2431091b-9e2c-4eb0-993a-a6893cb79df1-kube-api-access-v78jg\") pod \"etcd-operator-69b85846b6-r8lf9\" (UID: \"2431091b-9e2c-4eb0-993a-a6893cb79df1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.053250 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.055128 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.057807 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.557789541 +0000 UTC m=+110.465780700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.061955 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjnn\" (UniqueName: \"kubernetes.io/projected/b84e24d9-6489-40df-a5bc-a2f2e09fcbb7-kube-api-access-9vjnn\") pod \"router-default-68cf44c8b8-lcqd6\" (UID: \"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7\") " pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.074567 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.140701 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd8ks\" (UniqueName: \"kubernetes.io/projected/c45ee4a1-ebdf-4a47-853b-9cae5ac88246-kube-api-access-hd8ks\") pod \"multus-admission-controller-69db94689b-k6vgk\" (UID: \"c45ee4a1-ebdf-4a47-853b-9cae5ac88246\") " pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.141284 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6dmw\" (UniqueName: \"kubernetes.io/projected/72d68d2a-4326-4f35-b609-3d29fc70a888-kube-api-access-l6dmw\") pod \"service-ca-operator-5b9c976747-7tkh8\" (UID: \"72d68d2a-4326-4f35-b609-3d29fc70a888\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.141685 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl87b\" (UniqueName: \"kubernetes.io/projected/1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1-kube-api-access-zl87b\") pod \"cluster-image-registry-operator-86c45576b9-qwvxc\" (UID: \"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.142230 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z44px\" (UniqueName: \"kubernetes.io/projected/ea8bf4d2-1a9a-47e3-8013-e422b80a164b-kube-api-access-z44px\") pod \"olm-operator-5cdf44d969-g7zm7\" (UID: \"ea8bf4d2-1a9a-47e3-8013-e422b80a164b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.144674 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"kube-api-access-wqzmp\" (UniqueName: \"kubernetes.io/projected/e91b78a2-592a-42d0-af84-eb81d184bfd7-kube-api-access-wqzmp\") pod \"package-server-manager-77f986bd66-bjwlp\" (UID: \"e91b78a2-592a-42d0-af84-eb81d184bfd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.156562 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.158867 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.160254 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.66023551 +0000 UTC m=+110.568226669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.162479 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nl9v\" (UniqueName: \"kubernetes.io/projected/378c9b4b-6598-489c-9af4-b776c79341f6-kube-api-access-7nl9v\") pod \"marketplace-operator-547dbd544d-l5t96\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.167250 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.168329 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-cjjqx" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.176924 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.189723 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntlm\" (UniqueName: \"kubernetes.io/projected/5bcda332-2469-4e96-911f-6091f5ab33ee-kube-api-access-vntlm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-77nhq\" (UID: \"5bcda332-2469-4e96-911f-6091f5ab33ee\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.199806 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf226324-c47f-4cdc-9d31-85a1295236a5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-cljsg\" (UID: \"cf226324-c47f-4cdc-9d31-85a1295236a5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.226545 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdwf2\" (UniqueName: \"kubernetes.io/projected/e8da381c-e396-48dd-a445-40d51d56858d-kube-api-access-zdwf2\") pod \"collect-profiles-29425800-msrsb\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.234186 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.259339 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.260728 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.261147 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.761130338 +0000 UTC m=+110.669121497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.295318 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.307062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdlhs\" (UniqueName: \"kubernetes.io/projected/e002b2da-aaa6-4bf5-93a0-0d08a9467038-kube-api-access-vdlhs\") pod \"service-ca-74545575db-gdjh7\" (UID: \"e002b2da-aaa6-4bf5-93a0-0d08a9467038\") " pod="openshift-service-ca/service-ca-74545575db-gdjh7" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.317208 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.326323 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.330271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xlmf\" (UniqueName: \"kubernetes.io/projected/c21ce8a6-6a8e-4959-88ec-ebef7caa2a78-kube-api-access-8xlmf\") pod \"machine-config-server-f6zss\" (UID: \"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78\") " pod="openshift-machine-config-operator/machine-config-server-f6zss" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.335654 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.347525 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.347648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvfzh\" (UniqueName: \"kubernetes.io/projected/fab2bff9-a63d-4213-b55b-c19d14831aa5-kube-api-access-xvfzh\") pod \"cni-sysctl-allowlist-ds-9bn92\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.361979 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grzfm\" (UniqueName: \"kubernetes.io/projected/c37b74b9-0534-4df9-9af9-a1c10e3a9b89-kube-api-access-grzfm\") pod \"csi-hostpathplugin-gw5g9\" (UID: \"c37b74b9-0534-4df9-9af9-a1c10e3a9b89\") " pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.362239 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.362569 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.362723 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.862698314 +0000 UTC m=+110.770689473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.363174 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.363402 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk"] Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.363542 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:37.863531867 +0000 UTC m=+110.771523026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.381341 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g7pc\" (UniqueName: \"kubernetes.io/projected/0222cddf-6d62-4ba5-924b-f1f64d892c84-kube-api-access-2g7pc\") pod \"dns-default-c8mpt\" (UID: \"0222cddf-6d62-4ba5-924b-f1f64d892c84\") " pod="openshift-dns/dns-default-c8mpt" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.390361 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbbfq\" (UniqueName: \"kubernetes.io/projected/77db1b7a-74ff-4cbd-aacf-ac295b36d84c-kube-api-access-nbbfq\") pod \"openshift-controller-manager-operator-686468bdd5-8rxm9\" (UID: \"77db1b7a-74ff-4cbd-aacf-ac295b36d84c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.393043 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh"] Dec 12 14:12:37 crc kubenswrapper[5108]: W1212 14:12:37.395739 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84e24d9_6489_40df_a5bc_a2f2e09fcbb7.slice/crio-f57b47d4a2563ae19c1d9a319a4778974ff51bfa707474f1be26f96dfe5037d7 WatchSource:0}: Error finding container f57b47d4a2563ae19c1d9a319a4778974ff51bfa707474f1be26f96dfe5037d7: Status 404 returned error can't find the container with id 
f57b47d4a2563ae19c1d9a319a4778974ff51bfa707474f1be26f96dfe5037d7 Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.400982 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-gdjh7" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.402051 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdhkn\" (UniqueName: \"kubernetes.io/projected/aa7e1be7-f229-433f-812a-b47e2c151895-kube-api-access-vdhkn\") pod \"ingress-canary-fcvr7\" (UID: \"aa7e1be7-f229-433f-812a-b47e2c151895\") " pod="openshift-ingress-canary/ingress-canary-fcvr7" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.403774 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxwn\" (UniqueName: \"kubernetes.io/projected/13a39891-aaca-425b-adee-d73d8c5f7bcd-kube-api-access-jpxwn\") pod \"catalog-operator-75ff9f647d-zlssf\" (UID: \"13a39891-aaca-425b-adee-d73d8c5f7bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.405268 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2"] Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.407019 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fcvr7" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.430608 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.446512 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-cjjqx" podStartSLOduration=90.446490542 podStartE2EDuration="1m30.446490542s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:37.446428201 +0000 UTC m=+110.354419360" watchObservedRunningTime="2025-12-12 14:12:37.446490542 +0000 UTC m=+110.354481701" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.446695 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f6zss" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.453489 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.459115 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c8mpt" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.464305 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.464777 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 14:12:37.964760583 +0000 UTC m=+110.872751742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.489313 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" Dec 12 14:12:37 crc kubenswrapper[5108]: W1212 14:12:37.530719 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f48e3e_03a0_42fc_ab1d_37c77fb10f65.slice/crio-3461b67fc557c7f4ce49efb9978277228f3d1df7827dbf26325fb431d09a85ce WatchSource:0}: Error finding container 3461b67fc557c7f4ce49efb9978277228f3d1df7827dbf26325fb431d09a85ce: Status 404 returned error can't find the container with id 3461b67fc557c7f4ce49efb9978277228f3d1df7827dbf26325fb431d09a85ce Dec 12 14:12:37 crc kubenswrapper[5108]: W1212 14:12:37.545702 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7766476_9154_440b_b5a3_cce6b6b7c7b4.slice/crio-b476a770ebbf9bb795fc5798a26b9156bb16a29883e23f0335d600a665af247a WatchSource:0}: Error finding container b476a770ebbf9bb795fc5798a26b9156bb16a29883e23f0335d600a665af247a: Status 404 returned error can't find the container with id b476a770ebbf9bb795fc5798a26b9156bb16a29883e23f0335d600a665af247a Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.563276 5108 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xhpw5" podStartSLOduration=90.563254165 podStartE2EDuration="1m30.563254165s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:37.559071973 +0000 UTC m=+110.467063132" watchObservedRunningTime="2025-12-12 14:12:37.563254165 +0000 UTC m=+110.471245324" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.566240 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.566939 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.066920494 +0000 UTC m=+110.974911653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.609462 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.609450145 podStartE2EDuration="18.609450145s" podCreationTimestamp="2025-12-12 14:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:37.608706446 +0000 UTC m=+110.516697615" watchObservedRunningTime="2025-12-12 14:12:37.609450145 +0000 UTC m=+110.517441294" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.667438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.667817 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.167801681 +0000 UTC m=+111.075792840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.680602 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.692304 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9" Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.761972 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda66581ea_fc96_4aba_9332_566bb17c7b71.slice/crio-conmon-6a241310c089e1d657c86672491633d8568493618be3c8e9bb2ff6f0611d1d71.scope\": RecentStats: unable to find data in memory cache]" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.769287 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.770059 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.270041525 +0000 UTC m=+111.178032684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.875363 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.890006 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.389982964 +0000 UTC m=+111.297974123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.890271 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bvv8g" podStartSLOduration=90.89023389 podStartE2EDuration="1m30.89023389s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:37.882347748 +0000 UTC m=+110.790338917" watchObservedRunningTime="2025-12-12 14:12:37.89023389 +0000 UTC m=+110.798225049" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.954630 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-tx2lf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.954708 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" podUID="b6da6d66-adc4-4cd5-968f-21877a7820f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.967031 5108 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-blzxz"]
Dec 12 14:12:37 crc kubenswrapper[5108]: I1212 14:12:37.992864 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:37 crc kubenswrapper[5108]: E1212 14:12:37.993266 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.493252404 +0000 UTC m=+111.401243563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.010043 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr"]
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.014669 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" event={"ID":"ae92c75f-face-43b3-8dd7-011d99508d20","Type":"ContainerStarted","Data":"483e13b8bc060a03282f8e36455cfd211a090a5aa55b7943472616779f37b393"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.016994 5108 generic.go:358] "Generic (PLEG): container finished" podID="4ec1fbc1-dd55-49ef-b374-28698de88e40" containerID="82c1364d963958ecabc43a35648cd1a09c72390caf85768d6a1df19bced6843d" exitCode=0
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.017070 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" event={"ID":"4ec1fbc1-dd55-49ef-b374-28698de88e40","Type":"ContainerDied","Data":"82c1364d963958ecabc43a35648cd1a09c72390caf85768d6a1df19bced6843d"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.017262 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql"]
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.021393 5108 generic.go:358] "Generic (PLEG): container finished" podID="a66581ea-fc96-4aba-9332-566bb17c7b71" containerID="6a241310c089e1d657c86672491633d8568493618be3c8e9bb2ff6f0611d1d71" exitCode=0
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.021461 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" event={"ID":"a66581ea-fc96-4aba-9332-566bb17c7b71","Type":"ContainerDied","Data":"6a241310c089e1d657c86672491633d8568493618be3c8e9bb2ff6f0611d1d71"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.027500 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" event={"ID":"4149b83c-6a14-4f2f-b097-e59fcb47b122","Type":"ContainerStarted","Data":"70e461d102aa4cbdac3840d87adee158ca4021a7e22daa32babb5550a6137ab1"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.031629 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" event={"ID":"fab2bff9-a63d-4213-b55b-c19d14831aa5","Type":"ContainerStarted","Data":"169237c0d573255b15ee0e04266328cb4a6cd2bbe9de8eaeb8a133e88e0bf90c"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.038199 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-hrj8v" event={"ID":"29c8dcea-f999-4e33-9f5e-ef9eb8a423f7","Type":"ContainerStarted","Data":"d36323d29f1dfb4474cd224a5b3fe0f1e07de8a8eedc276fdfc50aadfd4d5c3f"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.038246 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-hrj8v" event={"ID":"29c8dcea-f999-4e33-9f5e-ef9eb8a423f7","Type":"ContainerStarted","Data":"d64e0b567774e733c9b5569924b099f22b6ca66fcde1dbe9c9a676cdf9d3bc38"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.038874 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-hrj8v"
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.044015 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-hrj8v container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.044095 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-hrj8v" podUID="29c8dcea-f999-4e33-9f5e-ef9eb8a423f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused"
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.044146 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f6zss" event={"ID":"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78","Type":"ContainerStarted","Data":"2343b735ee4aaf0487cf055dbad0044f4753d640a6cf7afd35bdc3633bc4294b"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.050862 5108 generic.go:358] "Generic (PLEG): container finished" podID="b2a054ba-6a30-47c2-b042-8e859282af9c" containerID="486450e22e035a125d4c8f2565c299008a98976a7d27cd4ea4ce500b4c8c204e" exitCode=0
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.052351 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" event={"ID":"b2a054ba-6a30-47c2-b042-8e859282af9c","Type":"ContainerDied","Data":"486450e22e035a125d4c8f2565c299008a98976a7d27cd4ea4ce500b4c8c204e"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.055212 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-np6kd" event={"ID":"d2d38bed-cc7f-4c81-a918-78814a48a49f","Type":"ContainerStarted","Data":"b4065a4243844528688343b95b8127b4c6e3c66798a91d17664ae82582a6c019"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.058284 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" event={"ID":"a0eab168-419a-4cb1-b318-244a89a1af5e","Type":"ContainerStarted","Data":"f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.061496 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" event={"ID":"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7","Type":"ContainerStarted","Data":"f57b47d4a2563ae19c1d9a319a4778974ff51bfa707474f1be26f96dfe5037d7"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.068776 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz" event={"ID":"8e90fae7-1eff-4924-ba9c-a1325c4099e9","Type":"ContainerStarted","Data":"9b70278d895229a3d324b5e6dd64be069b09fc48763beb73290a5dfe30936dac"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.074041 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" event={"ID":"d7766476-9154-440b-b5a3-cce6b6b7c7b4","Type":"ContainerStarted","Data":"b476a770ebbf9bb795fc5798a26b9156bb16a29883e23f0335d600a665af247a"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.077324 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" event={"ID":"36f48e3e-03a0-42fc-ab1d-37c77fb10f65","Type":"ContainerStarted","Data":"3461b67fc557c7f4ce49efb9978277228f3d1df7827dbf26325fb431d09a85ce"}
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.094223 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.094603 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.594583994 +0000 UTC m=+111.502575153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.122755 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" podStartSLOduration=91.12274043 podStartE2EDuration="1m31.12274043s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:38.122570405 +0000 UTC m=+111.030561584" watchObservedRunningTime="2025-12-12 14:12:38.12274043 +0000 UTC m=+111.030731589"
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.179716 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6"
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.195470 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.196801 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.696788047 +0000 UTC m=+111.604779196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.226970 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.230650 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm"]
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.276537 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.298444 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.298836 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.798820395 +0000 UTC m=+111.706811554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.305582 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-87mjz"]
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.418018 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.418531 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:38.918516176 +0000 UTC m=+111.826507335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.440292 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 14:12:38 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Dec 12 14:12:38 crc kubenswrapper[5108]: [+]process-running ok
Dec 12 14:12:38 crc kubenswrapper[5108]: healthz check failed
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.440364 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.529453 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.535508 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.035475305 +0000 UTC m=+111.943466474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.640789 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.641135 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.14112172 +0000 UTC m=+112.049112879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.743520 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.744135 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.244106654 +0000 UTC m=+112.152097813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.844748 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.845376 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.345361611 +0000 UTC m=+112.253352780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.961230 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:38 crc kubenswrapper[5108]: E1212 14:12:38.962214 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.462189315 +0000 UTC m=+112.370180474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:38 crc kubenswrapper[5108]: I1212 14:12:38.971055 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" podStartSLOduration=91.971008833 podStartE2EDuration="1m31.971008833s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:38.959747101 +0000 UTC m=+111.867738270" watchObservedRunningTime="2025-12-12 14:12:38.971008833 +0000 UTC m=+111.878999982"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.063411 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.063739 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.563727651 +0000 UTC m=+112.471718810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.093419 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-ztlbz" podStartSLOduration=92.093296294 podStartE2EDuration="1m32.093296294s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.093298424 +0000 UTC m=+112.001289583" watchObservedRunningTime="2025-12-12 14:12:39.093296294 +0000 UTC m=+112.001287453"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.126282 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" event={"ID":"02ff303a-02e9-48e1-a9c5-919ddbc4988a","Type":"ContainerStarted","Data":"d5054790f759221c44ae4e4e89af698c6c0a43f374a9ba45b39c42e0462a30fe"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.166305 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.167001 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.666970371 +0000 UTC m=+112.574961530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.168301 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-l5t96"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.175258 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.192914 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 14:12:39 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Dec 12 14:12:39 crc kubenswrapper[5108]: [+]process-running ok
Dec 12 14:12:39 crc kubenswrapper[5108]: healthz check failed
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.192994 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.196437 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.196656 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-hrj8v" podStartSLOduration=92.196636457 podStartE2EDuration="1m32.196636457s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.191250533 +0000 UTC m=+112.099241712" watchObservedRunningTime="2025-12-12 14:12:39.196636457 +0000 UTC m=+112.104627616"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.218472 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.227639 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" event={"ID":"a66581ea-fc96-4aba-9332-566bb17c7b71","Type":"ContainerStarted","Data":"44cb24d122eb223ea4c52a4ae9499e386969aa1e7dc633618a59ce712b5a31c8"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.230961 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podStartSLOduration=92.230946598 podStartE2EDuration="1m32.230946598s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.229833888 +0000 UTC m=+112.137825067" watchObservedRunningTime="2025-12-12 14:12:39.230946598 +0000 UTC m=+112.138937757"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.247854 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.269422 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" event={"ID":"c24a841d-7735-4fc3-b1b8-df1af8ae4328","Type":"ContainerStarted","Data":"2460d3855507d934c366f1d24682db77168db6f1ee18924d0d69723f2e809456"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.275773 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.276360 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.776341296 +0000 UTC m=+112.684332455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.291978 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" event={"ID":"fab2bff9-a63d-4213-b55b-c19d14831aa5","Type":"ContainerStarted","Data":"ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.298241 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.333919 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f6zss" event={"ID":"c21ce8a6-6a8e-4959-88ec-ebef7caa2a78","Type":"ContainerStarted","Data":"51db6d86b896ab27988481ed5c64f86d55fda986478d44d9b7c05a23c59f95a4"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.346429 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-np6kd" podStartSLOduration=92.346388976 podStartE2EDuration="1m32.346388976s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.276212493 +0000 UTC m=+112.184203662" watchObservedRunningTime="2025-12-12 14:12:39.346388976 +0000 UTC m=+112.254380135"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.349634 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.386495 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.387747 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.887725735 +0000 UTC m=+112.795716904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.388336 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.388448 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.390488 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.405565 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" event={"ID":"365b2743-bb33-4572-bd95-22945397200b","Type":"ContainerStarted","Data":"6da7fc5aed4c5039a53dc8b5d74b81ef7b4e011ec039cc1a52d040affedfe455"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.427133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" event={"ID":"b84e24d9-6489-40df-a5bc-a2f2e09fcbb7","Type":"ContainerStarted","Data":"9f5d90d2a75213345d8c27be7468d64a70d069ad6565886fbfe2352e85605479"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.431419 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.456046 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.456123 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.458732 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.466501 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" event={"ID":"d7766476-9154-440b-b5a3-cce6b6b7c7b4","Type":"ContainerStarted","Data":"6dace55a37986ff49a4b90e02773dd7f4aa892bc7c733ea0b3914ca34b010529"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.472643 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qwk4x" podStartSLOduration=92.472622763 podStartE2EDuration="1m32.472622763s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.396170751 +0000 UTC m=+112.304161920" watchObservedRunningTime="2025-12-12 14:12:39.472622763 +0000 UTC m=+112.380613922"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.472965 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-gdjh7"]
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.485255 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" event={"ID":"36f48e3e-03a0-42fc-ab1d-37c77fb10f65","Type":"ContainerStarted","Data":"a628ed31ce78f93200c8189168f9c27c9f2f9f8102920fd782c2c9b8c308c6d4"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.490480 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.493397 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:39.993383161 +0000 UTC m=+112.901374320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:39 crc kubenswrapper[5108]: W1212 14:12:39.550137 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8da381c_e396_48dd_a445_40d51d56858d.slice/crio-036138a9cbc6a11510c97c5566588a062e7885407d64621ab351105054843c4c WatchSource:0}: Error finding container 036138a9cbc6a11510c97c5566588a062e7885407d64621ab351105054843c4c: Status 404 returned error can't find the container with id 036138a9cbc6a11510c97c5566588a062e7885407d64621ab351105054843c4c
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.557477 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" podStartSLOduration=92.55745946 podStartE2EDuration="1m32.55745946s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.555416005 +0000 UTC m=+112.463407174" watchObservedRunningTime="2025-12-12 14:12:39.55745946 +0000 UTC m=+112.465450619"
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.561359 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" event={"ID":"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26","Type":"ContainerStarted","Data":"e924e3462ad378699e8a705bef7b0a9f61f76711930516190cc813395f53179e"}
Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.571807 5108 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.580703 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fmnpk" podStartSLOduration=92.580682813 podStartE2EDuration="1m32.580682813s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.574691762 +0000 UTC m=+112.482682931" watchObservedRunningTime="2025-12-12 14:12:39.580682813 +0000 UTC m=+112.488673972" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.581869 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-k6vgk"] Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.592598 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.594507 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.094462123 +0000 UTC m=+113.002453282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.601207 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-blzxz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body= Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.601272 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" podUID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.605207 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg"] Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.605260 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" event={"ID":"ae92c75f-face-43b3-8dd7-011d99508d20","Type":"ContainerStarted","Data":"8a7802b841d91d37a1c6edc2170154b2990e8bf3b234744550d03cab7147b97b"} Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.613122 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-f6zss" podStartSLOduration=6.613072552 
podStartE2EDuration="6.613072552s" podCreationTimestamp="2025-12-12 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.601611895 +0000 UTC m=+112.509603074" watchObservedRunningTime="2025-12-12 14:12:39.613072552 +0000 UTC m=+112.521063711" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.633137 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" event={"ID":"f7426742-3adb-48e6-be7c-375e4860babe","Type":"ContainerStarted","Data":"2b20ebd2cc081545f0239eb269b076272d97c2d8e4dbe8d65d2acca361c5504b"} Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.667348 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-hrj8v container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.667444 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-hrj8v" podUID="29c8dcea-f999-4e33-9f5e-ef9eb8a423f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.677183 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" podStartSLOduration=6.677166192 podStartE2EDuration="6.677166192s" podCreationTimestamp="2025-12-12 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.64576402 +0000 UTC m=+112.553755199" watchObservedRunningTime="2025-12-12 14:12:39.677166192 +0000 UTC m=+112.585157351" Dec 12 14:12:39 crc kubenswrapper[5108]: 
I1212 14:12:39.678415 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c8mpt"] Dec 12 14:12:39 crc kubenswrapper[5108]: W1212 14:12:39.681359 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc45ee4a1_ebdf_4a47_853b_9cae5ac88246.slice/crio-e63cd7ce1040e0cc7c709089465eff507053e63022af4dcc2ee4c68b201b7e9d WatchSource:0}: Error finding container e63cd7ce1040e0cc7c709089465eff507053e63022af4dcc2ee4c68b201b7e9d: Status 404 returned error can't find the container with id e63cd7ce1040e0cc7c709089465eff507053e63022af4dcc2ee4c68b201b7e9d Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.693819 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.699896 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" podStartSLOduration=92.699864041 podStartE2EDuration="1m32.699864041s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.69755173 +0000 UTC m=+112.605542899" watchObservedRunningTime="2025-12-12 14:12:39.699864041 +0000 UTC m=+112.607855200" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.700398 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gw5g9"] Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.701698 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.20168163 +0000 UTC m=+113.109672789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.757856 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8"] Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.760855 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" podStartSLOduration=92.760841028 podStartE2EDuration="1m32.760841028s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.741023696 +0000 UTC m=+112.649014865" watchObservedRunningTime="2025-12-12 14:12:39.760841028 +0000 UTC m=+112.668832207" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.773485 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wz7d2" podStartSLOduration=92.773446536 podStartE2EDuration="1m32.773446536s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.765489782 +0000 UTC 
m=+112.673480951" watchObservedRunningTime="2025-12-12 14:12:39.773446536 +0000 UTC m=+112.681437695" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.801576 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.801991 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.301969161 +0000 UTC m=+113.209960320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:39 crc kubenswrapper[5108]: W1212 14:12:39.812745 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d68d2a_4326_4f35_b609_3d29fc70a888.slice/crio-3f819f59f3f3d5ed42d1e03a94f111f311fa43b773cd0fdf60f380541599407c WatchSource:0}: Error finding container 3f819f59f3f3d5ed42d1e03a94f111f311fa43b773cd0fdf60f380541599407c: Status 404 returned error can't find the container with id 3f819f59f3f3d5ed42d1e03a94f111f311fa43b773cd0fdf60f380541599407c Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.814565 5108 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" podStartSLOduration=92.814542288 podStartE2EDuration="1m32.814542288s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.804436977 +0000 UTC m=+112.712428146" watchObservedRunningTime="2025-12-12 14:12:39.814542288 +0000 UTC m=+112.722533447" Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.815689 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9"] Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.828677 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fcvr7"] Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.829782 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf"] Dec 12 14:12:39 crc kubenswrapper[5108]: I1212 14:12:39.908962 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:39 crc kubenswrapper[5108]: E1212 14:12:39.909476 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.409462776 +0000 UTC m=+113.317453935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.010388 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.011186 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.511166135 +0000 UTC m=+113.419157304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.111991 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.112635 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.612615577 +0000 UTC m=+113.520606736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.202012 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:40 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Dec 12 14:12:40 crc kubenswrapper[5108]: [+]process-running ok Dec 12 14:12:40 crc kubenswrapper[5108]: healthz check failed Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.202057 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.216646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.216777 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 14:12:40.716755922 +0000 UTC m=+113.624747101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.217205 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.217521 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.717514632 +0000 UTC m=+113.625505791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.324730 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.325002 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.824987837 +0000 UTC m=+113.732978986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.328509 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bn92"] Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.426615 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.426894 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:40.9268828 +0000 UTC m=+113.834873959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.528224 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.528719 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.028702933 +0000 UTC m=+113.936694082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.630035 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.630404 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.130387982 +0000 UTC m=+114.038379141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.694698 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" event={"ID":"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26","Type":"ContainerStarted","Data":"2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.732622 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.733126 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.233108148 +0000 UTC m=+114.141099307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.734393 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" event={"ID":"e91b78a2-592a-42d0-af84-eb81d184bfd7","Type":"ContainerStarted","Data":"67fc1830726bb8f62ed6d3201edb193248e259f5f94c8bff44f83e1e3e9296a6"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.734443 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" event={"ID":"e91b78a2-592a-42d0-af84-eb81d184bfd7","Type":"ContainerStarted","Data":"666cbb9b060091d715b8e733228ef1f71701fb5230ea44014b8a310b56d1b77a"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.753454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" event={"ID":"f7426742-3adb-48e6-be7c-375e4860babe","Type":"ContainerStarted","Data":"249fde46e1a78d592aad652d90ed35e7f465e77381f334ff4ad92257c6bc6067"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.768577 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" event={"ID":"ea8bf4d2-1a9a-47e3-8013-e422b80a164b","Type":"ContainerStarted","Data":"e16ff8e013a0d53178db81011c8fd56435a11605c6879633a4770b5f508dc985"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.768613 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" event={"ID":"ea8bf4d2-1a9a-47e3-8013-e422b80a164b","Type":"ContainerStarted","Data":"cd8b0d8f4a972c1f16de3b7b7680e57ef4a08861aea6da9195c56b0ab5ce7384"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.772670 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.776283 5108 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-g7zm7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.776401 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" podUID="ea8bf4d2-1a9a-47e3-8013-e422b80a164b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.809041 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.809376 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.823493 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" event={"ID":"4ec1fbc1-dd55-49ef-b374-28698de88e40","Type":"ContainerStarted","Data":"5481b96026f9cd7fcf9f7c925353af803759f554d864f55a5dfc42a74170612b"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.823563 5108 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" event={"ID":"4ec1fbc1-dd55-49ef-b374-28698de88e40","Type":"ContainerStarted","Data":"58e815ffb314f70deb11e2e5b6fde116cabd9b925e3898185acf3799fc0f1302"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.836233 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.838801 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.338786774 +0000 UTC m=+114.246777933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.841939 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" event={"ID":"c24a841d-7735-4fc3-b1b8-df1af8ae4328","Type":"ContainerStarted","Data":"cb242c90eb6c87cec620980c1a92214f05ddc221158e7792d1fab14a563ccfd1"} Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.845938 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.871283 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" podStartSLOduration=93.871265186 podStartE2EDuration="1m33.871265186s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:40.808697236 +0000 UTC m=+113.716688415" watchObservedRunningTime="2025-12-12 14:12:40.871265186 +0000 UTC m=+113.779256345" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.872631 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" podStartSLOduration=93.872621812 podStartE2EDuration="1m33.872621812s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:40.870510395 +0000 UTC m=+113.778501564" watchObservedRunningTime="2025-12-12 14:12:40.872621812 +0000 UTC m=+113.780612961" Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.942908 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.943047 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.443024521 +0000 UTC m=+114.351015680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.943322 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:40 crc kubenswrapper[5108]: E1212 14:12:40.945673 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.445665942 +0000 UTC m=+114.353657101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:40 crc kubenswrapper[5108]: I1212 14:12:40.961714 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" event={"ID":"c37b74b9-0534-4df9-9af9-a1c10e3a9b89","Type":"ContainerStarted","Data":"075f0fe4001b734a06a85c46349d45da94d7068acce7d6e0ad08deaf48a4dbef"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.017524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" event={"ID":"5bcda332-2469-4e96-911f-6091f5ab33ee","Type":"ContainerStarted","Data":"5edfb666b79947f6eb8453826e40142b09b51eec1ea4619a2503a3d13abb9336"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.056558 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.057097 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.557058952 +0000 UTC m=+114.465050111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.057134 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-9dtpr" podStartSLOduration=94.057115873 podStartE2EDuration="1m34.057115873s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:40.972452121 +0000 UTC m=+113.880443290" watchObservedRunningTime="2025-12-12 14:12:41.057115873 +0000 UTC m=+113.965107032" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.058367 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.058627 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" podStartSLOduration=94.058619323 podStartE2EDuration="1m34.058619323s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.05737363 +0000 UTC m=+113.965364799" watchObservedRunningTime="2025-12-12 14:12:41.058619323 +0000 UTC m=+113.966610492" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.058737 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.087875 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" event={"ID":"365b2743-bb33-4572-bd95-22945397200b","Type":"ContainerStarted","Data":"977d195f7f3b06bf00936b4d485dafa6ae1621be8f2e6c807312b3b7862455c3"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.087922 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" event={"ID":"365b2743-bb33-4572-bd95-22945397200b","Type":"ContainerStarted","Data":"1ba2c4a97f45e9e6812e5f116ab02039840be2a57a28d4cbd70e6ec3b6e339ff"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.118299 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" event={"ID":"e8da381c-e396-48dd-a445-40d51d56858d","Type":"ContainerStarted","Data":"f9d546fc4dcd8c79d4d75d65c3cd62ea79ce8a68421b3aac37842de83409c9e9"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.118349 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" event={"ID":"e8da381c-e396-48dd-a445-40d51d56858d","Type":"ContainerStarted","Data":"036138a9cbc6a11510c97c5566588a062e7885407d64621ab351105054843c4c"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.126888 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" event={"ID":"c45ee4a1-ebdf-4a47-853b-9cae5ac88246","Type":"ContainerStarted","Data":"e63cd7ce1040e0cc7c709089465eff507053e63022af4dcc2ee4c68b201b7e9d"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.134229 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" 
event={"ID":"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1","Type":"ContainerStarted","Data":"47a05a9c9cbd9b0096be48ad2e1476c47e29fee83b9f63442bfa8b9c2fa502c9"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.138929 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" event={"ID":"b2a054ba-6a30-47c2-b042-8e859282af9c","Type":"ContainerStarted","Data":"764ca32846445afa1e8e03cb4a1456f607d654d08e6489de1c6ccb04b0107861"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.151228 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-gdjh7" event={"ID":"e002b2da-aaa6-4bf5-93a0-0d08a9467038","Type":"ContainerStarted","Data":"1e7a91d0ce30c5c9514b1b6f7289f7ff8eaa0ef916bf300a0ecf5b8c58abfff6"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.152517 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-w9pnf" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.154912 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-kd9gm" podStartSLOduration=94.154900906 podStartE2EDuration="1m34.154900906s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.153987533 +0000 UTC m=+114.061978692" watchObservedRunningTime="2025-12-12 14:12:41.154900906 +0000 UTC m=+114.062892065" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.166774 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " 
pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.185872 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" event={"ID":"cf226324-c47f-4cdc-9d31-85a1295236a5","Type":"ContainerStarted","Data":"1901adf7c5ae51b2e1d0d4d896c4dd108a8aefaec5626ffb6e235412dc43f0ab"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.208178 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dzxhh" event={"ID":"36f48e3e-03a0-42fc-ab1d-37c77fb10f65","Type":"ContainerStarted","Data":"c74ce42c817bb94f80039b32e226a914468d48001d7213a1409b5035e5d4e334"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.214796 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9" event={"ID":"77db1b7a-74ff-4cbd-aacf-ac295b36d84c","Type":"ContainerStarted","Data":"4dec7c704568003f11170ea058f38368a87b2e992b291b7cb82612f5c3f00ff7"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.218464 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:41 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Dec 12 14:12:41 crc kubenswrapper[5108]: [+]process-running ok Dec 12 14:12:41 crc kubenswrapper[5108]: healthz check failed Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.218975 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:41 crc 
kubenswrapper[5108]: E1212 14:12:41.220438 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.720407695 +0000 UTC m=+114.628398854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.245142 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" podStartSLOduration=94.245126798 podStartE2EDuration="1m34.245126798s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.244560862 +0000 UTC m=+114.152552021" watchObservedRunningTime="2025-12-12 14:12:41.245126798 +0000 UTC m=+114.153117957" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.245667 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-gdjh7" podStartSLOduration=94.245656692 podStartE2EDuration="1m34.245656692s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.197239803 +0000 UTC m=+114.105230972" watchObservedRunningTime="2025-12-12 14:12:41.245656692 +0000 UTC m=+114.153647841" Dec 12 
14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.251252 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" event={"ID":"13a39891-aaca-425b-adee-d73d8c5f7bcd","Type":"ContainerStarted","Data":"d6f9d955eff5cf21029797cd6e9935082861f90099708373f4980898c9d64d2f"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.255369 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" event={"ID":"f7722b55-5f8c-4480-8ad6-af633ecad9d2","Type":"ContainerStarted","Data":"5b123aabc97ae1664be283991d6174741b8d101a232161687e0bdc3b745a6cf8"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.255407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" event={"ID":"f7722b55-5f8c-4480-8ad6-af633ecad9d2","Type":"ContainerStarted","Data":"8ea5e86496ef101ae3bc839367c559f04607ce54da5f88382db8f73448bf8217"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.258099 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" event={"ID":"02ff303a-02e9-48e1-a9c5-919ddbc4988a","Type":"ContainerStarted","Data":"8fced7b6b4ec9819bebabf00f974304658d700885a492485d2d3b54b35cdbee9"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.258131 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" event={"ID":"02ff303a-02e9-48e1-a9c5-919ddbc4988a","Type":"ContainerStarted","Data":"cd58a7ad616d58a10d142a75ea610d3908bc86c641d6c0c9313b1239049ae386"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.259471 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fcvr7" 
event={"ID":"aa7e1be7-f229-433f-812a-b47e2c151895","Type":"ContainerStarted","Data":"1b471d9c5b8b5418c12c5f2b5dea4fbf26a7d939902ca9704ee01e9631f61362"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.270180 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.272406 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.772385529 +0000 UTC m=+114.680376688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.275917 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" podStartSLOduration=94.275895244 podStartE2EDuration="1m34.275895244s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.268634129 +0000 UTC m=+114.176625288" watchObservedRunningTime="2025-12-12 14:12:41.275895244 +0000 UTC m=+114.183886403" Dec 
12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.296538 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" event={"ID":"be473823-0596-45e5-a2fd-c6016d361d20","Type":"ContainerStarted","Data":"e238caeab4940c1ad3831d9e86ab60f3564eeb4a02ab29759ea39291a0e965b8"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.296584 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" event={"ID":"be473823-0596-45e5-a2fd-c6016d361d20","Type":"ContainerStarted","Data":"6738dfe5ea1a2b85954b17347b158996c64420c09715e7e20d99038d75d24f72"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.298517 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.316335 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c8mpt" event={"ID":"0222cddf-6d62-4ba5-924b-f1f64d892c84","Type":"ContainerStarted","Data":"26205addf6367feb7828148814d0f7b7d3fcac3067535e88a4fba8c12a987fb5"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.327349 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" event={"ID":"2431091b-9e2c-4eb0-993a-a6893cb79df1","Type":"ContainerStarted","Data":"551ae1755e6bd689aaf14ece7408c4ec2a74b4109d4786c54646e10c07cb4870"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.359916 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" event={"ID":"c1d808ae-6ef1-4996-8f03-fa3102c0d165","Type":"ContainerStarted","Data":"724fdc230c55e7bf9a4637d678dea8a135992d4232a3d6b595d437f1c4c09871"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.359967 5108 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" event={"ID":"c1d808ae-6ef1-4996-8f03-fa3102c0d165","Type":"ContainerStarted","Data":"bc449c44132ac799d00d4986e307290f0b430b22bae04f6a88de814c0d290a4d"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.360810 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8gpql" podStartSLOduration=94.360794642 podStartE2EDuration="1m34.360794642s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.337864037 +0000 UTC m=+114.245855196" watchObservedRunningTime="2025-12-12 14:12:41.360794642 +0000 UTC m=+114.268785801" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.361127 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fcvr7" podStartSLOduration=8.361121891 podStartE2EDuration="8.361121891s" podCreationTimestamp="2025-12-12 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.360965837 +0000 UTC m=+114.268957006" watchObservedRunningTime="2025-12-12 14:12:41.361121891 +0000 UTC m=+114.269113060" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.362125 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" event={"ID":"378c9b4b-6598-489c-9af4-b776c79341f6","Type":"ContainerStarted","Data":"da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.362154 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" 
event={"ID":"378c9b4b-6598-489c-9af4-b776c79341f6","Type":"ContainerStarted","Data":"c646abea4c0fb59fba13aac90afe0b60fc08dda1694f6701aa7ff0f882bb9abe"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.362766 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.372612 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.372636 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-l5t96 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.372690 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" podUID="378c9b4b-6598-489c-9af4-b776c79341f6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.373206 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.873190494 +0000 UTC m=+114.781181653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.388485 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" event={"ID":"81011d2a-11c4-491a-beeb-7af27e94f011","Type":"ContainerStarted","Data":"5e56e220fa56f99da1fd7575c28a3e52e9718eb223b2da24a4fb921e7f621b86"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.388545 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" event={"ID":"81011d2a-11c4-491a-beeb-7af27e94f011","Type":"ContainerStarted","Data":"4a643193d9d2978b5bc31f77e465be063e346a8d150483018384c0980d705a85"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.406375 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-6dfcz" podStartSLOduration=94.406350714 podStartE2EDuration="1m34.406350714s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.405712798 +0000 UTC m=+114.313703967" watchObservedRunningTime="2025-12-12 14:12:41.406350714 +0000 UTC m=+114.314341863" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.466498 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8" 
event={"ID":"72d68d2a-4326-4f35-b609-3d29fc70a888","Type":"ContainerStarted","Data":"3f819f59f3f3d5ed42d1e03a94f111f311fa43b773cd0fdf60f380541599407c"} Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.474972 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.479370 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.979347483 +0000 UTC m=+114.887338652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.480070 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.482107 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-5777786469-zsrsd" Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.483687 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:41.983672909 +0000 UTC m=+114.891664068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.488149 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" podStartSLOduration=94.488125948 podStartE2EDuration="1m34.488125948s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.487358698 +0000 UTC m=+114.395349867" watchObservedRunningTime="2025-12-12 14:12:41.488125948 +0000 UTC m=+114.396117107" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.489832 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hzphx" podStartSLOduration=94.489824725 podStartE2EDuration="1m34.489824725s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.445657649 
+0000 UTC m=+114.353648818" watchObservedRunningTime="2025-12-12 14:12:41.489824725 +0000 UTC m=+114.397815884" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.564374 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" podStartSLOduration=94.564337394 podStartE2EDuration="1m34.564337394s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.523727804 +0000 UTC m=+114.431718993" watchObservedRunningTime="2025-12-12 14:12:41.564337394 +0000 UTC m=+114.472328553" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.580975 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.582984 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.082965614 +0000 UTC m=+114.990956773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.638783 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" podStartSLOduration=94.638764631 podStartE2EDuration="1m34.638764631s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.566907723 +0000 UTC m=+114.474898882" watchObservedRunningTime="2025-12-12 14:12:41.638764631 +0000 UTC m=+114.546755790" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.638993 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" podStartSLOduration=94.638986747 podStartE2EDuration="1m34.638986747s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:41.637446006 +0000 UTC m=+114.545437175" watchObservedRunningTime="2025-12-12 14:12:41.638986747 +0000 UTC m=+114.546977916" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.683559 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: 
\"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.684103 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.184064276 +0000 UTC m=+115.092055435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.697102 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-blzxz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": context deadline exceeded" start-of-body= Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.697192 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" podUID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": context deadline exceeded" Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.785619 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.786325 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.2863008 +0000 UTC m=+115.194291959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.887649 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.887944 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.387932738 +0000 UTC m=+115.295923897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:41 crc kubenswrapper[5108]: I1212 14:12:41.991485 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:41 crc kubenswrapper[5108]: E1212 14:12:41.992019 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.49200412 +0000 UTC m=+115.399995279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.099875 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.100301 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.600286317 +0000 UTC m=+115.508277476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.189151 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:42 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Dec 12 14:12:42 crc kubenswrapper[5108]: [+]process-running ok Dec 12 14:12:42 crc kubenswrapper[5108]: healthz check failed Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.189226 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.200747 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.200924 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 14:12:42.700897416 +0000 UTC m=+115.608888575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.201401 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.201675 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.701668377 +0000 UTC m=+115.609659536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.221354 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-msx9x" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.302235 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.302483 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.802449981 +0000 UTC m=+115.710441150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.302634 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.302983 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.802968015 +0000 UTC m=+115.710959254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.403900 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.404072 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.904046788 +0000 UTC m=+115.812037947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.404441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.404779 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:42.904772337 +0000 UTC m=+115.812763496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.505690 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.506038 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.006009783 +0000 UTC m=+115.914000942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.506325 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.506702 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.006685962 +0000 UTC m=+115.914677121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.597325 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fcvr7" event={"ID":"aa7e1be7-f229-433f-812a-b47e2c151895","Type":"ContainerStarted","Data":"b1aa63ea72b34dbf9ff081a27045562b20ed121817d9042ac8fe3814a70351d8"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.609160 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.609618 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.109600103 +0000 UTC m=+116.017591262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.612116 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c8mpt" event={"ID":"0222cddf-6d62-4ba5-924b-f1f64d892c84","Type":"ContainerStarted","Data":"fe9cb6cba7059bb430507febc50e225b01254532b0c454e4eee25d6220f09acc"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.612170 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c8mpt" event={"ID":"0222cddf-6d62-4ba5-924b-f1f64d892c84","Type":"ContainerStarted","Data":"a9c5070eefa8f9b2b16994fc13df2ac303795861a36fa67266b2064532222aa5"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.612211 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-c8mpt" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.613817 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r8lf9" event={"ID":"2431091b-9e2c-4eb0-993a-a6893cb79df1","Type":"ContainerStarted","Data":"30c144578a5b03453dccf470babf18d7fb209f115e91cb67a2ed816629818a1a"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.619780 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-c7wpt" event={"ID":"c1d808ae-6ef1-4996-8f03-fa3102c0d165","Type":"ContainerStarted","Data":"9b145db139e43815d8cfe73a41ea20b56dcf709a63fdb79036de1eea2d24ea1b"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.621191 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8" event={"ID":"72d68d2a-4326-4f35-b609-3d29fc70a888","Type":"ContainerStarted","Data":"4ec490584933228c16946efa270d4b16a0b812708ea39b9bbdbfbee476d9396b"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.623353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" event={"ID":"e91b78a2-592a-42d0-af84-eb81d184bfd7","Type":"ContainerStarted","Data":"18276bb21109930845c5a14c6882a1635684a844b6f4f2332b75e2ab7140d14b"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.623698 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.625784 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" event={"ID":"f7426742-3adb-48e6-be7c-375e4860babe","Type":"ContainerStarted","Data":"9d35bc7e3ecb6d809cdb87b1b444a96f90b98c82e5046b708c9de98fb660bd19"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.628179 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-77nhq" event={"ID":"5bcda332-2469-4e96-911f-6091f5ab33ee","Type":"ContainerStarted","Data":"4c6ca8f4bc1c0a30dafc36eedb98b29974c6845c04f75029cb6236bf5038d308"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.631437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" event={"ID":"c45ee4a1-ebdf-4a47-853b-9cae5ac88246","Type":"ContainerStarted","Data":"cdea3dceef9ff4e7b68552de6f63315b564168bcbac17931bb29b538aea4c7ef"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.631482 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" event={"ID":"c45ee4a1-ebdf-4a47-853b-9cae5ac88246","Type":"ContainerStarted","Data":"8e0036a843855768b60dbe91623fc46ab6d869d97d9108877d2ba76a4c8037fa"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.638439 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c8mpt" podStartSLOduration=9.638428077 podStartE2EDuration="9.638428077s" podCreationTimestamp="2025-12-12 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:42.637674786 +0000 UTC m=+115.545665965" watchObservedRunningTime="2025-12-12 14:12:42.638428077 +0000 UTC m=+115.546419226" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.641524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qwvxc" event={"ID":"1000c2c0-f2b5-4e8b-b0c8-58ec52b524b1","Type":"ContainerStarted","Data":"4b62c6d21dc13de0e2247963dd0df31e8ce42f3bb5e6cd2003136533e28abb0c"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.646301 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-gdjh7" event={"ID":"e002b2da-aaa6-4bf5-93a0-0d08a9467038","Type":"ContainerStarted","Data":"314d1825c0238ef9cdc542e02bfc0e966e2bb8b006879191a7c4999ac79e068c"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.649578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" event={"ID":"cf226324-c47f-4cdc-9d31-85a1295236a5","Type":"ContainerStarted","Data":"759d2eda82e25a4760172f69731eeac46acf65256b4c5b72dfbd153e5ef9a0b9"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.655436 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9" event={"ID":"77db1b7a-74ff-4cbd-aacf-ac295b36d84c","Type":"ContainerStarted","Data":"9a4cc9078aaa3832cda3223af5a86dde36545d2bce1982ca9d267f2c2a72e8fd"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.661412 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-l5t96 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.661465 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" podUID="378c9b4b-6598-489c-9af4-b776c79341f6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.662368 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" gracePeriod=30 Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.662672 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" event={"ID":"13a39891-aaca-425b-adee-d73d8c5f7bcd","Type":"ContainerStarted","Data":"8de0b98c5aec5145a85b0822d78aaec5c6d58415fbed7b4772f9bd02ed4ab1d5"} Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.664153 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" Dec 12 14:12:42 crc kubenswrapper[5108]: 
I1212 14:12:42.675236 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.677441 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.681901 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g7zm7" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.700597 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7tkh8" podStartSLOduration=95.700582455 podStartE2EDuration="1m35.700582455s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:42.699518837 +0000 UTC m=+115.607510006" watchObservedRunningTime="2025-12-12 14:12:42.700582455 +0000 UTC m=+115.608573614" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.711331 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.717606 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.217593062 +0000 UTC m=+116.125584221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.751815 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" podStartSLOduration=95.751792379 podStartE2EDuration="1m35.751792379s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:42.730806646 +0000 UTC m=+115.638797825" watchObservedRunningTime="2025-12-12 14:12:42.751792379 +0000 UTC m=+115.659783558" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.752073 5108 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-5k7p6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]log ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]etcd ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/generic-apiserver-start-informers ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/max-in-flight-filter ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 12 14:12:42 crc kubenswrapper[5108]: 
[-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 12 14:12:42 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectcache ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-startinformers ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 12 14:12:42 crc kubenswrapper[5108]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 12 14:12:42 crc kubenswrapper[5108]: livez check failed Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.752161 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6" podUID="4ec1fbc1-dd55-49ef-b374-28698de88e40" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.815250 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.815982 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.315957701 +0000 UTC m=+116.223948860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.816447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.816716 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.316703721 +0000 UTC m=+116.224694880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.881133 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-k6vgk" podStartSLOduration=95.881115479 podStartE2EDuration="1m35.881115479s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:42.815512689 +0000 UTC m=+115.723503848" watchObservedRunningTime="2025-12-12 14:12:42.881115479 +0000 UTC m=+115.789106638" Dec 12 14:12:42 crc kubenswrapper[5108]: I1212 14:12:42.920098 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:42 crc kubenswrapper[5108]: E1212 14:12:42.920469 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.420452915 +0000 UTC m=+116.328444074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.037260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.037555 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.537543197 +0000 UTC m=+116.445534356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.070648 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-87mjz" podStartSLOduration=96.070632715 podStartE2EDuration="1m36.070632715s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:42.883177945 +0000 UTC m=+115.791169124" watchObservedRunningTime="2025-12-12 14:12:43.070632715 +0000 UTC m=+115.978623874" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.072559 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8rxm9" podStartSLOduration=96.072550897 podStartE2EDuration="1m36.072550897s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:43.069872345 +0000 UTC m=+115.977863504" watchObservedRunningTime="2025-12-12 14:12:43.072550897 +0000 UTC m=+115.980542056" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.138816 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.139380 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.6393567 +0000 UTC m=+116.547347859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.179973 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:43 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Dec 12 14:12:43 crc kubenswrapper[5108]: [+]process-running ok Dec 12 14:12:43 crc kubenswrapper[5108]: healthz check failed Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.180057 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.240526 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.240963 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.740947025 +0000 UTC m=+116.648938184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.252414 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zlssf" podStartSLOduration=96.252397642 podStartE2EDuration="1m36.252397642s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:43.25044133 +0000 UTC m=+116.158432499" watchObservedRunningTime="2025-12-12 14:12:43.252397642 +0000 UTC m=+116.160388811" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.284595 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-cljsg" podStartSLOduration=96.284574216 podStartE2EDuration="1m36.284574216s" 
podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:43.282451389 +0000 UTC m=+116.190442558" watchObservedRunningTime="2025-12-12 14:12:43.284574216 +0000 UTC m=+116.192565375" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.315211 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56472: no serving certificate available for the kubelet" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.341551 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.341997 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.841975776 +0000 UTC m=+116.749966935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.443034 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.443418 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56476: no serving certificate available for the kubelet" Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.443487 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:43.94346772 +0000 UTC m=+116.851458919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.544401 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.544777 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.044762298 +0000 UTC m=+116.952753457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.646104 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.646484 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.146467928 +0000 UTC m=+117.054459087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.756225 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.756448 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56486: no serving certificate available for the kubelet" Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.756673 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.256656065 +0000 UTC m=+117.164647224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.849540 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.881379 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.899339 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.399321713 +0000 UTC m=+117.307312872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.942376 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56502: no serving certificate available for the kubelet" Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.999168 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:43 crc kubenswrapper[5108]: E1212 14:12:43.999297 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.499260815 +0000 UTC m=+117.407251974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:43 crc kubenswrapper[5108]: I1212 14:12:43.999485 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.002760 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.502743338 +0000 UTC m=+117.410734497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.042555 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56514: no serving certificate available for the kubelet" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.100917 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.101200 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.601147719 +0000 UTC m=+117.509138878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.101360 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.101715 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.601697974 +0000 UTC m=+117.509689133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.166242 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56516: no serving certificate available for the kubelet" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.182690 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:44 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Dec 12 14:12:44 crc kubenswrapper[5108]: [+]process-running ok Dec 12 14:12:44 crc kubenswrapper[5108]: healthz check failed Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.182809 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.202734 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.203058 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.703032743 +0000 UTC m=+117.611023942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.203271 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.203725 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.703705241 +0000 UTC m=+117.611696400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.304537 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.304740 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.804708901 +0000 UTC m=+117.712700060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.304874 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.305222 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.805208964 +0000 UTC m=+117.713200123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.351005 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56520: no serving certificate available for the kubelet" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.406251 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.406425 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.906408051 +0000 UTC m=+117.814399210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.406520 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.406853 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:44.906842482 +0000 UTC m=+117.814833641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.507693 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.507871 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.007828932 +0000 UTC m=+117.915820091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.508162 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.508511 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.00849766 +0000 UTC m=+117.916488889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.609412 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.609751 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.109731697 +0000 UTC m=+118.017722866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.618999 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fg9h4"] Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.634749 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.638743 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.644778 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg9h4"] Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.710982 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-catalog-content\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.711399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-utilities\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.711491 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.711528 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fwk\" (UniqueName: 
\"kubernetes.io/projected/35559d5b-861e-4f71-b4fe-9cafa147f46b-kube-api-access-77fwk\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.711855 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.211840807 +0000 UTC m=+118.119831966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.746294 5108 ???:1] "http: TLS handshake error from 192.168.126.11:58186: no serving certificate available for the kubelet" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.813428 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.813602 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 14:12:45.313576087 +0000 UTC m=+118.221567246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.813846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-utilities\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.813948 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.813976 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77fwk\" (UniqueName: \"kubernetes.io/projected/35559d5b-861e-4f71-b4fe-9cafa147f46b-kube-api-access-77fwk\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.814066 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-catalog-content\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.814516 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-catalog-content\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.814766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-utilities\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.814984 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.314977415 +0000 UTC m=+118.222968574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.823783 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-99vmq"] Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.827950 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.832388 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.847991 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" event={"ID":"c37b74b9-0534-4df9-9af9-a1c10e3a9b89","Type":"ContainerStarted","Data":"db3d6f7361da1ac350bb9430d22842da6c4c26e3e7a5592d3f103cdd7b63940e"} Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.849978 5108 generic.go:358] "Generic (PLEG): container finished" podID="e8da381c-e396-48dd-a445-40d51d56858d" containerID="f9d546fc4dcd8c79d4d75d65c3cd62ea79ce8a68421b3aac37842de83409c9e9" exitCode=0 Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.850044 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" event={"ID":"e8da381c-e396-48dd-a445-40d51d56858d","Type":"ContainerDied","Data":"f9d546fc4dcd8c79d4d75d65c3cd62ea79ce8a68421b3aac37842de83409c9e9"} Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 
14:12:44.914916 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77fwk\" (UniqueName: \"kubernetes.io/projected/35559d5b-861e-4f71-b4fe-9cafa147f46b-kube-api-access-77fwk\") pod \"certified-operators-fg9h4\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") " pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.915483 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.915605 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz7v5\" (UniqueName: \"kubernetes.io/projected/f963a2e4-7bac-4938-ba67-f65a48ac4806-kube-api-access-gz7v5\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.915748 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.415714647 +0000 UTC m=+118.323705806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.915984 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.916105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-utilities\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.916204 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-catalog-content\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:44 crc kubenswrapper[5108]: E1212 14:12:44.916625 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 14:12:45.416607502 +0000 UTC m=+118.324598901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:44 crc kubenswrapper[5108]: I1212 14:12:44.968482 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.023644 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.023828 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.523802409 +0000 UTC m=+118.431793568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.023931 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gz7v5\" (UniqueName: \"kubernetes.io/projected/f963a2e4-7bac-4938-ba67-f65a48ac4806-kube-api-access-gz7v5\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.024093 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.024144 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-utilities\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.024200 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-catalog-content\") pod \"community-operators-99vmq\" (UID: 
\"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.024632 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-catalog-content\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.025140 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.525131354 +0000 UTC m=+118.433122513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.025453 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-utilities\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.130675 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.131262 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.631243961 +0000 UTC m=+118.539235120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.188759 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:45 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Dec 12 14:12:45 crc kubenswrapper[5108]: [+]process-running ok Dec 12 14:12:45 crc kubenswrapper[5108]: healthz check failed Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.188840 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.244879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.245290 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.745277552 +0000 UTC m=+118.653268711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.313688 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz7v5\" (UniqueName: \"kubernetes.io/projected/f963a2e4-7bac-4938-ba67-f65a48ac4806-kube-api-access-gz7v5\") pod \"community-operators-99vmq\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") " pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.346357 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.346800 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.846778085 +0000 UTC m=+118.754769254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.350721 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99vmq"]
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.395740 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pdqkf"]
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.404266 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.448219 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.448303 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-utilities\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.448416 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdddn\" (UniqueName: \"kubernetes.io/projected/4273b638-c6db-4a97-bbf1-e7390f6b555a-kube-api-access-cdddn\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.448531 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-catalog-content\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.448967 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:45.948935766 +0000 UTC m=+118.856926925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.498668 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdqkf"]
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.501777 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zp6xr"]
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.514704 5108 ???:1] "http: TLS handshake error from 192.168.126.11:58190: no serving certificate available for the kubelet"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.544332 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.550284 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.550701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-utilities\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.551257 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.051198851 +0000 UTC m=+118.959190010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.551331 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdddn\" (UniqueName: \"kubernetes.io/projected/4273b638-c6db-4a97-bbf1-e7390f6b555a-kube-api-access-cdddn\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.551826 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-utilities\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.552502 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-catalog-content\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.552637 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.553143 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.053122473 +0000 UTC m=+118.961113632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.553730 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-catalog-content\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.565330 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zp6xr"]
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.602462 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99vmq"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.653361 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdddn\" (UniqueName: \"kubernetes.io/projected/4273b638-c6db-4a97-bbf1-e7390f6b555a-kube-api-access-cdddn\") pod \"certified-operators-pdqkf\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.653600 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.653891 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-catalog-content\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.653965 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bxlk\" (UniqueName: \"kubernetes.io/projected/d9418022-cc61-44ab-99a4-4afbf84fad60-kube-api-access-4bxlk\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.654025 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.1540014 +0000 UTC m=+119.061992569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.654180 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-utilities\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.654288 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.654621 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.154611046 +0000 UTC m=+119.062602205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.732902 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.757849 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.758123 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-utilities\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.758217 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-catalog-content\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.758744 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-utilities\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.760535 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-catalog-content\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.779530 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.279479486 +0000 UTC m=+119.187470645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.779686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4bxlk\" (UniqueName: \"kubernetes.io/projected/d9418022-cc61-44ab-99a4-4afbf84fad60-kube-api-access-4bxlk\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.789529 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.789622 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-np6kd"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.798190 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-np6kd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.798258 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-np6kd" podUID="d2d38bed-cc7f-4c81-a918-78814a48a49f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.807782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bxlk\" (UniqueName: \"kubernetes.io/projected/d9418022-cc61-44ab-99a4-4afbf84fad60-kube-api-access-4bxlk\") pod \"community-operators-zp6xr\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.868541 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg9h4"]
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.881361 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.882906 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.382891051 +0000 UTC m=+119.290882210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.909494 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.986335 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.986529 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.486496132 +0000 UTC m=+119.394487301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:45 crc kubenswrapper[5108]: I1212 14:12:45.986920 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:45 crc kubenswrapper[5108]: E1212 14:12:45.987490 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.487480528 +0000 UTC m=+119.395471697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.078740 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.091623 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.092062 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.592042855 +0000 UTC m=+119.500034014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.095638 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99vmq"]
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.095840 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-5k7p6"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.182549 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 14:12:46 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Dec 12 14:12:46 crc kubenswrapper[5108]: [+]process-running ok
Dec 12 14:12:46 crc kubenswrapper[5108]: healthz check failed
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.182861 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.197134 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.197419 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.697408352 +0000 UTC m=+119.605399511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.298000 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.298437 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.798410992 +0000 UTC m=+119.706402151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.298697 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.299069 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.799062529 +0000 UTC m=+119.707053688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.301210 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdqkf"]
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.316514 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.370953 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zp6xr"]
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.392501 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ddbtk"]
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.393292 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8da381c-e396-48dd-a445-40d51d56858d" containerName="collect-profiles"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.393314 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8da381c-e396-48dd-a445-40d51d56858d" containerName="collect-profiles"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.393421 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e8da381c-e396-48dd-a445-40d51d56858d" containerName="collect-profiles"
Dec 12 14:12:46 crc kubenswrapper[5108]: W1212 14:12:46.396390 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9418022_cc61_44ab_99a4_4afbf84fad60.slice/crio-711d77d1affee574ef330cb08e0e5686c9677197d5b2c4d83d1f2621d35c9663 WatchSource:0}: Error finding container 711d77d1affee574ef330cb08e0e5686c9677197d5b2c4d83d1f2621d35c9663: Status 404 returned error can't find the container with id 711d77d1affee574ef330cb08e0e5686c9677197d5b2c4d83d1f2621d35c9663
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.402849 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8da381c-e396-48dd-a445-40d51d56858d-secret-volume\") pod \"e8da381c-e396-48dd-a445-40d51d56858d\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") "
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.402992 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.403014 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdwf2\" (UniqueName: \"kubernetes.io/projected/e8da381c-e396-48dd-a445-40d51d56858d-kube-api-access-zdwf2\") pod \"e8da381c-e396-48dd-a445-40d51d56858d\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") "
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.403109 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8da381c-e396-48dd-a445-40d51d56858d-config-volume\") pod \"e8da381c-e396-48dd-a445-40d51d56858d\" (UID: \"e8da381c-e396-48dd-a445-40d51d56858d\") "
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.404020 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8da381c-e396-48dd-a445-40d51d56858d-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8da381c-e396-48dd-a445-40d51d56858d" (UID: "e8da381c-e396-48dd-a445-40d51d56858d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.404924 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:46.904892159 +0000 UTC m=+119.812883328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.406058 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddbtk"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.408644 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.421364 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8da381c-e396-48dd-a445-40d51d56858d-kube-api-access-zdwf2" (OuterVolumeSpecName: "kube-api-access-zdwf2") pod "e8da381c-e396-48dd-a445-40d51d56858d" (UID: "e8da381c-e396-48dd-a445-40d51d56858d"). InnerVolumeSpecName "kube-api-access-zdwf2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.422266 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8da381c-e396-48dd-a445-40d51d56858d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8da381c-e396-48dd-a445-40d51d56858d" (UID: "e8da381c-e396-48dd-a445-40d51d56858d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.440413 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddbtk"]
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.466653 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-hrj8v container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.466706 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-hrj8v" podUID="29c8dcea-f999-4e33-9f5e-ef9eb8a423f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.504807 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-catalog-content\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.504859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmgvh\" (UniqueName: \"kubernetes.io/projected/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-kube-api-access-bmgvh\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.504889 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.504923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-utilities\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk"
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.505033 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8da381c-e396-48dd-a445-40d51d56858d-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.505044 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdwf2\" (UniqueName: \"kubernetes.io/projected/e8da381c-e396-48dd-a445-40d51d56858d-kube-api-access-zdwf2\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.505053 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8da381c-e396-48dd-a445-40d51d56858d-config-volume\") on node \"crc\" DevicePath \"\""
Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.505338 5108 nestedpendingoperations.go:348] Operation
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.005323935 +0000 UTC m=+119.913315094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.606598 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.607095 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-catalog-content\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.607168 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmgvh\" (UniqueName: \"kubernetes.io/projected/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-kube-api-access-bmgvh\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.607255 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-utilities\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.607865 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-utilities\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.608311 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-catalog-content\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.607979 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.107955519 +0000 UTC m=+120.015946688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.638326 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmgvh\" (UniqueName: \"kubernetes.io/projected/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-kube-api-access-bmgvh\") pod \"redhat-marketplace-ddbtk\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") " pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.708867 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.709191 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.209178935 +0000 UTC m=+120.117170094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.741250 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.795996 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ljwkh"] Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.810106 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.810499 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.310479174 +0000 UTC m=+120.218470353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.886582 5108 ???:1] "http: TLS handshake error from 192.168.126.11:58192: no serving certificate available for the kubelet" Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.898339 5108 generic.go:358] "Generic (PLEG): container finished" podID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerID="1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37" exitCode=0 Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.905396 5108 generic.go:358] "Generic (PLEG): container finished" podID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerID="9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c" exitCode=0 Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.908007 5108 generic.go:358] "Generic (PLEG): container finished" podID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerID="ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d" exitCode=0 Dec 12 14:12:46 crc kubenswrapper[5108]: I1212 14:12:46.911998 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:46 crc kubenswrapper[5108]: E1212 14:12:46.912439 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.41242524 +0000 UTC m=+120.320416399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.013665 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.014028 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.514013045 +0000 UTC m=+120.422004204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.115110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.115482 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.615464488 +0000 UTC m=+120.523455637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.180039 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lcqd6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:47 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Dec 12 14:12:47 crc kubenswrapper[5108]: [+]process-running ok Dec 12 14:12:47 crc kubenswrapper[5108]: healthz check failed Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.180146 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" podUID="b84e24d9-6489-40df-a5bc-a2f2e09fcbb7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201418 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9h4" event={"ID":"35559d5b-861e-4f71-b4fe-9cafa147f46b","Type":"ContainerDied","Data":"1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201472 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljwkh"] Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201498 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:47 crc 
kubenswrapper[5108]: I1212 14:12:47.201508 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9h4" event={"ID":"35559d5b-861e-4f71-b4fe-9cafa147f46b","Type":"ContainerStarted","Data":"3fc4549efc55c994b44aa22e5d065827a82f871624e8c12c772b1e1b7fd860fa"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" event={"ID":"e8da381c-e396-48dd-a445-40d51d56858d","Type":"ContainerDied","Data":"036138a9cbc6a11510c97c5566588a062e7885407d64621ab351105054843c4c"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201522 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-msrsb" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201529 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036138a9cbc6a11510c97c5566588a062e7885407d64621ab351105054843c4c" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201712 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddbtk"] Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201756 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zp6xr" event={"ID":"d9418022-cc61-44ab-99a4-4afbf84fad60","Type":"ContainerStarted","Data":"711d77d1affee574ef330cb08e0e5686c9677197d5b2c4d83d1f2621d35c9663"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201797 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdqkf" event={"ID":"4273b638-c6db-4a97-bbf1-e7390f6b555a","Type":"ContainerDied","Data":"9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201801 5108 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.201823 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdqkf" event={"ID":"4273b638-c6db-4a97-bbf1-e7390f6b555a","Type":"ContainerStarted","Data":"29415488d986a8f8c40f371aebead671793e3898e55f80b55454eac3a734d690"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.202446 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vmq" event={"ID":"f963a2e4-7bac-4938-ba67-f65a48ac4806","Type":"ContainerDied","Data":"ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.202470 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vmq" event={"ID":"f963a2e4-7bac-4938-ba67-f65a48ac4806","Type":"ContainerStarted","Data":"01d56284ae79f428e9481b467b2bc25d65b4e72dcc7e5e8f266c5823c6f973b7"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.216724 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.217113 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.717052273 +0000 UTC m=+120.625043432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.318798 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.318869 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-utilities\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.319219 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-catalog-content\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.319404 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nng8g\" (UniqueName: 
\"kubernetes.io/projected/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-kube-api-access-nng8g\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.319486 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.819465382 +0000 UTC m=+120.727456541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.423722 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.423882 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.923841243 +0000 UTC m=+120.831832402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.425398 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-utilities\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.425514 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-catalog-content\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.425677 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nng8g\" (UniqueName: \"kubernetes.io/projected/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-kube-api-access-nng8g\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.425858 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: 
\"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.426352 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:47.92632413 +0000 UTC m=+120.834315289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.428278 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-utilities\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.428802 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-catalog-content\") pod \"redhat-marketplace-ljwkh\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.453410 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nng8g\" (UniqueName: \"kubernetes.io/projected/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-kube-api-access-nng8g\") pod \"redhat-marketplace-ljwkh\" (UID: 
\"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") " pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.516766 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.527001 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.527202 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.027160036 +0000 UTC m=+120.935151215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.527767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.528231 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.028213623 +0000 UTC m=+120.936204782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.630295 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.631583 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.131255779 +0000 UTC m=+121.039246938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.732577 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.732856 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.232844384 +0000 UTC m=+121.140835543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.833337 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.833534 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.333505416 +0000 UTC m=+121.241496585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.833720 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.834035 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.33402249 +0000 UTC m=+121.242013649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.914505 5108 generic.go:358] "Generic (PLEG): container finished" podID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerID="53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4" exitCode=0 Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.914794 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddbtk" event={"ID":"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b","Type":"ContainerDied","Data":"53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.914879 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddbtk" event={"ID":"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b","Type":"ContainerStarted","Data":"0e31086cafa9b75dc3c21944198504050830f8e4516fd5e9877ff4bf500801a3"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.918329 5108 generic.go:358] "Generic (PLEG): container finished" podID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerID="009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086" exitCode=0 Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.918386 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zp6xr" event={"ID":"d9418022-cc61-44ab-99a4-4afbf84fad60","Type":"ContainerDied","Data":"009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086"} Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.934813 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.935005 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.434978169 +0000 UTC m=+121.342969328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:47 crc kubenswrapper[5108]: I1212 14:12:47.935135 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:47 crc kubenswrapper[5108]: E1212 14:12:47.935680 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.435666947 +0000 UTC m=+121.343658106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.007266 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bfb2d"] Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.036818 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.036994 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.536970546 +0000 UTC m=+121.444961705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.037330 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.037687 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.537676045 +0000 UTC m=+121.445667204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.138798 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.139264 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.639241011 +0000 UTC m=+121.547232170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.139566 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.140179 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.640155895 +0000 UTC m=+121.548147054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.241166 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.241352 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.74132644 +0000 UTC m=+121.649317599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.241667 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.241955 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.741943226 +0000 UTC m=+121.649934385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.342485 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.342692 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.842665439 +0000 UTC m=+121.750656598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.342761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.343035 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.843028539 +0000 UTC m=+121.751019698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.444454 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.444644 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.944617595 +0000 UTC m=+121.852608754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.444976 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.445394 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:48.945378315 +0000 UTC m=+121.853369484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.546190 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.546368 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.046343294 +0000 UTC m=+121.954334473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.546940 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.547241 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.047231149 +0000 UTC m=+121.955222308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.647828 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.648048 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.147983312 +0000 UTC m=+122.055974471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.648315 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.648623 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.14861446 +0000 UTC m=+122.056605619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.756852 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.757041 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.257001728 +0000 UTC m=+122.164992887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.757608 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.758392 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.258339144 +0000 UTC m=+122.166330303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.859299 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.859510 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.359478308 +0000 UTC m=+122.267469467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.859910 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.860274 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.360257269 +0000 UTC m=+122.268248428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.962336 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.962565 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.462529914 +0000 UTC m=+122.370521083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:48 crc kubenswrapper[5108]: I1212 14:12:48.962903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:48 crc kubenswrapper[5108]: E1212 14:12:48.963256 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.463242962 +0000 UTC m=+122.371234121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.066207 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.066453 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.566413641 +0000 UTC m=+122.474404940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.067166 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.067567 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.567547762 +0000 UTC m=+122.475538981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.078346 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bfb2d"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.078411 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljwkh"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.078514 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.078559 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljwkh" event={"ID":"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401","Type":"ContainerStarted","Data":"609328d82ab0590e9acd5198f0a960395486655ef1ad0ee24b4a3a51c7d2cb4e"} Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.078725 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.078745 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.078761 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.081126 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.136221 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-lcqd6" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.136264 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.136282 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r4bxk"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.137890 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.140493 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.140732 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.168595 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.168911 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-utilities\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.169016 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.668989873 +0000 UTC m=+122.576981092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.169304 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-catalog-content\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.169404 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkq4k\" (UniqueName: \"kubernetes.io/projected/b05ea99a-815e-48ce-b4bb-1efda1405964-kube-api-access-xkq4k\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.169543 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.169805 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.669798376 +0000 UTC m=+122.577789535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.180849 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4bxk"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.181049 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.366675 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.366767 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.86674983 +0000 UTC m=+122.774740989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.366903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-utilities\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.366982 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-catalog-content\") pod \"redhat-operators-bfb2d\" (UID: 
\"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367002 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snpcc\" (UniqueName: \"kubernetes.io/projected/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-kube-api-access-snpcc\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367035 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367052 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xkq4k\" (UniqueName: \"kubernetes.io/projected/b05ea99a-815e-48ce-b4bb-1efda1405964-kube-api-access-xkq4k\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367073 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-catalog-content\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367109 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-utilities\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367145 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367169 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.367398 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.867391948 +0000 UTC m=+122.775383107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.367844 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-utilities\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.368075 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-catalog-content\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.389590 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.407503 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkq4k\" (UniqueName: \"kubernetes.io/projected/b05ea99a-815e-48ce-b4bb-1efda1405964-kube-api-access-xkq4k\") pod \"redhat-operators-bfb2d\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") " pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 
14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.431340 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.433502 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.433580 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.468004 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.468187 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-snpcc\" (UniqueName: \"kubernetes.io/projected/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-kube-api-access-snpcc\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 
12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.468256 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.468288 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-catalog-content\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.468313 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-utilities\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.468352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.468511 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.468581 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:49.968566903 +0000 UTC m=+122.876558062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.470305 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-utilities\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.470881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-catalog-content\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.472390 5108 ???:1] "http: TLS handshake error from 192.168.126.11:58198: no serving certificate available for the kubelet" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.487465 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.493280 5108 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.493365 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-snpcc\" (UniqueName: \"kubernetes.io/projected/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-kube-api-access-snpcc\") pod \"redhat-operators-r4bxk\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") " pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.511387 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.570906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.571259 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.071247328 +0000 UTC m=+122.979238477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.633148 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.633324 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.635486 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.635746 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.654491 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-hrj8v container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.654538 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-hrj8v" podUID="29c8dcea-f999-4e33-9f5e-ef9eb8a423f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.675447 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.675878 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1907d985-7075-4b2e-a55e-b0b009af5954-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.676005 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1907d985-7075-4b2e-a55e-b0b009af5954-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.676104 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.176069171 +0000 UTC m=+123.084060330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.676314 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.676917 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.176902824 +0000 UTC m=+123.084893983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.693785 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.721733 5108 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.777383 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.777577 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1907d985-7075-4b2e-a55e-b0b009af5954-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.777644 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1907d985-7075-4b2e-a55e-b0b009af5954-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.778145 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.27812831 +0000 UTC m=+123.186119469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.778194 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1907d985-7075-4b2e-a55e-b0b009af5954-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.793286 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.799189 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4bxk"]
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.799792 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1907d985-7075-4b2e-a55e-b0b009af5954-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 14:12:49 crc kubenswrapper[5108]: W1212 14:12:49.855670 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa7c7c10_bd3b_4044_8aa8_7875e9f908a7.slice/crio-308b4d885c432ea4adecea1a46074b66e94873858018cec2eaaa77b05a0ce336 WatchSource:0}: Error finding container 308b4d885c432ea4adecea1a46074b66e94873858018cec2eaaa77b05a0ce336: Status 404 returned error can't find the container with id 308b4d885c432ea4adecea1a46074b66e94873858018cec2eaaa77b05a0ce336
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.878972 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.879527 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.379511001 +0000 UTC m=+123.287502160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.938685 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4bxk" event={"ID":"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7","Type":"ContainerStarted","Data":"308b4d885c432ea4adecea1a46074b66e94873858018cec2eaaa77b05a0ce336"}
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.940166 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" event={"ID":"c37b74b9-0534-4df9-9af9-a1c10e3a9b89","Type":"ContainerStarted","Data":"d7e65d8b5cb05051c2a047cc871abaceb4c4f335813ce56d00ecdc2420480a1a"}
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.940190 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" event={"ID":"c37b74b9-0534-4df9-9af9-a1c10e3a9b89","Type":"ContainerStarted","Data":"97e2d07cdf33a563e8a5b1fe51658eebc16143c79949161ffcbf9e6ac6bde0d7"}
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.941753 5108 generic.go:358] "Generic (PLEG): container finished" podID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerID="34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046" exitCode=0
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.942818 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljwkh" event={"ID":"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401","Type":"ContainerDied","Data":"34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046"}
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.955224 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.983399 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.983980 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.483944433 +0000 UTC m=+123.391935592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:49 crc kubenswrapper[5108]: I1212 14:12:49.985022 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:49 crc kubenswrapper[5108]: E1212 14:12:49.985603 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.485579476 +0000 UTC m=+123.393570645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.087040 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.087372 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.587339527 +0000 UTC m=+123.495330696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.087613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.087949 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.587937014 +0000 UTC m=+123.495928173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.189479 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.189693 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.689661663 +0000 UTC m=+123.597652822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.189902 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.190280 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.69026617 +0000 UTC m=+123.598257329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.267825 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bfb2d"]
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.290699 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.291746 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.791730562 +0000 UTC m=+123.699721721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: W1212 14:12:50.292522 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb05ea99a_815e_48ce_b4bb_1efda1405964.slice/crio-7850374c1e90f0cc1f39c46484c75677e78c8279d1653fda4f94dd2cf5983edd WatchSource:0}: Error finding container 7850374c1e90f0cc1f39c46484c75677e78c8279d1653fda4f94dd2cf5983edd: Status 404 returned error can't find the container with id 7850374c1e90f0cc1f39c46484c75677e78c8279d1653fda4f94dd2cf5983edd
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.340343 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 12 14:12:50 crc kubenswrapper[5108]: W1212 14:12:50.365616 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podddb7d99c_8f6c_4794_b9d8_83b248bae45f.slice/crio-b1d60e247f02c43a0a4791b6d2e2e600e7388f1cf5ff9efb89eb6659fc160129 WatchSource:0}: Error finding container b1d60e247f02c43a0a4791b6d2e2e600e7388f1cf5ff9efb89eb6659fc160129: Status 404 returned error can't find the container with id b1d60e247f02c43a0a4791b6d2e2e600e7388f1cf5ff9efb89eb6659fc160129
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.395296 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.395709 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.895696562 +0000 UTC m=+123.803687721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.496995 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.497190 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.997157435 +0000 UTC m=+123.905148594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.497530 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: E1212 14:12:50.497884 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:50.997865943 +0000 UTC m=+123.905857102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hdk9b" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.570541 5108 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T14:12:49.721760877Z","UUID":"d8444a83-8708-40bf-b6ba-053f2fe892c6","Handler":null,"Name":"","Endpoint":""}
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.574582 5108 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.574620 5108 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.577187 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.598436 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:50 crc kubenswrapper[5108]: W1212 14:12:50.599603 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1907d985_7075_4b2e_a55e_b0b009af5954.slice/crio-b55f32216b79cbb94ac43f20c575b41f8710c7bba9af6899c4ef55bc060b8c40 WatchSource:0}: Error finding container b55f32216b79cbb94ac43f20c575b41f8710c7bba9af6899c4ef55bc060b8c40: Status 404 returned error can't find the container with id b55f32216b79cbb94ac43f20c575b41f8710c7bba9af6899c4ef55bc060b8c40
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.602656 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.700021 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.779603 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.779657 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.804898 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hdk9b\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") " pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.839878 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.848822 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.959839 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1907d985-7075-4b2e-a55e-b0b009af5954","Type":"ContainerStarted","Data":"b55f32216b79cbb94ac43f20c575b41f8710c7bba9af6899c4ef55bc060b8c40"}
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.964474 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfb2d" event={"ID":"b05ea99a-815e-48ce-b4bb-1efda1405964","Type":"ContainerStarted","Data":"7850374c1e90f0cc1f39c46484c75677e78c8279d1653fda4f94dd2cf5983edd"}
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.966193 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ddb7d99c-8f6c-4794-b9d8-83b248bae45f","Type":"ContainerStarted","Data":"b1d60e247f02c43a0a4791b6d2e2e600e7388f1cf5ff9efb89eb6659fc160129"}
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.968060 5108 generic.go:358] "Generic (PLEG): container finished" podID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerID="38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346" exitCode=0
Dec 12 14:12:50 crc kubenswrapper[5108]: I1212 14:12:50.968130 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4bxk" event={"ID":"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7","Type":"ContainerDied","Data":"38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346"}
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.225737 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.226140 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.226227 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.226302 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.228262 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.228274 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.229583 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.238160 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.242515 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.253232 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.257999 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.258538 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.341781 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.402302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.410916 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.422034 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.504786 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hdk9b"]
Dec 12 14:12:51 crc kubenswrapper[5108]: W1212 14:12:51.661490 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-6322785b8366bb5f83f8323119c3e4695c7b2c2823687fed3ee5b1b7a6546a77 WatchSource:0}: Error finding container 6322785b8366bb5f83f8323119c3e4695c7b2c2823687fed3ee5b1b7a6546a77: Status 404 returned error can't find the container with id 6322785b8366bb5f83f8323119c3e4695c7b2c2823687fed3ee5b1b7a6546a77
Dec 12 14:12:51 crc kubenswrapper[5108]: W1212 14:12:51.823688 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-93efcc7ef33bc7137cf53fb0353163b4a5802207de4ae7a6a5a4cfdc8aa1f02c WatchSource:0}: Error finding container 93efcc7ef33bc7137cf53fb0353163b4a5802207de4ae7a6a5a4cfdc8aa1f02c: Status 404 returned error can't find the container with id 93efcc7ef33bc7137cf53fb0353163b4a5802207de4ae7a6a5a4cfdc8aa1f02c
Dec 12 14:12:51 crc kubenswrapper[5108]: I1212 14:12:51.995682 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"100746b97bdb13f43ab18572efb5cfa530eca5787dc6db4da231239479f734fe"}
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.004188 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"6322785b8366bb5f83f8323119c3e4695c7b2c2823687fed3ee5b1b7a6546a77"}
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.006933 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" event={"ID":"78c92fa7-6dbe-4fef-8495-6dc6fe162b22","Type":"ContainerStarted","Data":"a141fe4905aca3e69172259bd6c7f02624d2eff9c7543ae8c5ef11b14f87a467"}
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.008456 5108 generic.go:358] "Generic (PLEG): container finished" podID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerID="bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168" exitCode=0
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.008506 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfb2d" event={"ID":"b05ea99a-815e-48ce-b4bb-1efda1405964","Type":"ContainerDied","Data":"bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168"}
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.009463 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"93efcc7ef33bc7137cf53fb0353163b4a5802207de4ae7a6a5a4cfdc8aa1f02c"}
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.012555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ddb7d99c-8f6c-4794-b9d8-83b248bae45f","Type":"ContainerStarted","Data":"112d05e4e0b8ac2d0bef1dd506034522589e18694d3d7fb0b0d9f42dcd73e436"}
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.554716 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.562848 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8c95a75-0c3b-4caa-9b09-30c6dca73e72-metrics-certs\") pod \"network-metrics-daemon-p4g92\" (UID: \"d8c95a75-0c3b-4caa-9b09-30c6dca73e72\") " pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.583795 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.592819 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p4g92"
Dec 12 14:12:52 crc kubenswrapper[5108]: I1212 14:12:52.842933 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c8mpt"
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.021474 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"1bc444bc8d83e2be09878cbc2570da2dcf9c480a36f76b4a2128b7073771a098"}
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.023173 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"f7c5af6add5e149f2d830f097b0bdd5ebdbefb926d10af7b6094349ed60b1fdb"}
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.025497 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1907d985-7075-4b2e-a55e-b0b009af5954","Type":"ContainerStarted","Data":"7e5bd6692b86d44ec0278ce14ce1ab7711139a135feeac4e1ccee55a30cd8b9a"}
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.036870 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"5731a5e6a6fcd9f893a19f07aa4acf22db8debc06377bee1d417bbad2da36238"}
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.037631 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.066524 5108 generic.go:358] "Generic (PLEG): container finished" podID="ddb7d99c-8f6c-4794-b9d8-83b248bae45f" containerID="112d05e4e0b8ac2d0bef1dd506034522589e18694d3d7fb0b0d9f42dcd73e436" exitCode=0
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.066693 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ddb7d99c-8f6c-4794-b9d8-83b248bae45f","Type":"ContainerDied","Data":"112d05e4e0b8ac2d0bef1dd506034522589e18694d3d7fb0b0d9f42dcd73e436"}
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.079010 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" event={"ID":"c37b74b9-0534-4df9-9af9-a1c10e3a9b89","Type":"ContainerStarted","Data":"a9c4fe3f8cb8c27819cad2158e94215d6bb56711abd94c43ac40a84d289fd5f9"}
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.087757 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.087739563 podStartE2EDuration="4.087739563s" podCreationTimestamp="2025-12-12 14:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:53.085658287 +0000 UTC m=+125.993649446" watchObservedRunningTime="2025-12-12 14:12:53.087739563 +0000 UTC m=+125.995730742"
Dec 12 14:12:53 crc kubenswrapper[5108]: I1212 14:12:53.110044 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gw5g9" podStartSLOduration=20.110021741 podStartE2EDuration="20.110021741s" podCreationTimestamp="2025-12-12 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:53.106355362 +0000 UTC m=+126.014346521" watchObservedRunningTime="2025-12-12 14:12:53.110021741 +0000 UTC m=+126.018012900"
Dec 12 14:12:54 crc kubenswrapper[5108]: I1212 14:12:54.086490 5108 generic.go:358] "Generic (PLEG): container finished" podID="1907d985-7075-4b2e-a55e-b0b009af5954" containerID="7e5bd6692b86d44ec0278ce14ce1ab7711139a135feeac4e1ccee55a30cd8b9a" exitCode=0
Dec 12 14:12:54 crc kubenswrapper[5108]: I1212 14:12:54.086593 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1907d985-7075-4b2e-a55e-b0b009af5954","Type":"ContainerDied","Data":"7e5bd6692b86d44ec0278ce14ce1ab7711139a135feeac4e1ccee55a30cd8b9a"}
Dec 12 14:12:54 crc kubenswrapper[5108]: I1212 14:12:54.088168 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" event={"ID":"78c92fa7-6dbe-4fef-8495-6dc6fe162b22","Type":"ContainerStarted","Data":"58720441a8c7f655a0c4ceb8b2d66a5315e2354e549c69ef1b43505d3e85ff43"}
Dec 12 14:12:54 crc kubenswrapper[5108]: I1212 14:12:54.088704 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:12:54 crc kubenswrapper[5108]: I1212 14:12:54.616515 5108 ???:1] "http: TLS handshake error from 192.168.126.11:35264: no serving certificate available for the kubelet"
Dec 12 14:12:55 crc kubenswrapper[5108]: I1212 14:12:55.789399 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-np6kd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 12 14:12:55 crc kubenswrapper[5108]: I1212 14:12:55.789452 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-np6kd" podUID="d2d38bed-cc7f-4c81-a918-78814a48a49f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 12 14:12:56 crc kubenswrapper[5108]: I1212 14:12:56.467028 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-hrj8v 
container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 12 14:12:56 crc kubenswrapper[5108]: I1212 14:12:56.467345 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-hrj8v" podUID="29c8dcea-f999-4e33-9f5e-ef9eb8a423f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 12 14:12:57 crc kubenswrapper[5108]: I1212 14:12:57.898688 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:12:57 crc kubenswrapper[5108]: I1212 14:12:57.938308 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" podStartSLOduration=110.938287386 podStartE2EDuration="1m50.938287386s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:54.181442102 +0000 UTC m=+127.089433271" watchObservedRunningTime="2025-12-12 14:12:57.938287386 +0000 UTC m=+130.846278545" Dec 12 14:12:59 crc kubenswrapper[5108]: E1212 14:12:59.301179 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:59 crc kubenswrapper[5108]: E1212 14:12:59.302652 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:59 crc kubenswrapper[5108]: E1212 14:12:59.303973 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:59 crc kubenswrapper[5108]: E1212 14:12:59.304021 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:12:59 crc kubenswrapper[5108]: I1212 14:12:59.664836 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-hrj8v" Dec 12 14:13:04 crc kubenswrapper[5108]: I1212 14:13:04.885134 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53926: no serving certificate available for the kubelet" Dec 12 14:13:05 crc kubenswrapper[5108]: I1212 14:13:05.794109 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-np6kd" Dec 12 14:13:05 crc kubenswrapper[5108]: I1212 14:13:05.798873 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-np6kd" Dec 12 14:13:09 crc kubenswrapper[5108]: E1212 14:13:09.301981 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:09 crc kubenswrapper[5108]: E1212 14:13:09.303773 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:09 crc kubenswrapper[5108]: E1212 14:13:09.304814 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:09 crc kubenswrapper[5108]: E1212 14:13:09.304853 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.517000 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.586125 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.645117 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kube-api-access\") pod \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.645212 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kubelet-dir\") pod \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\" (UID: \"ddb7d99c-8f6c-4794-b9d8-83b248bae45f\") " Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.645351 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ddb7d99c-8f6c-4794-b9d8-83b248bae45f" (UID: "ddb7d99c-8f6c-4794-b9d8-83b248bae45f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.645722 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.656048 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ddb7d99c-8f6c-4794-b9d8-83b248bae45f" (UID: "ddb7d99c-8f6c-4794-b9d8-83b248bae45f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.746814 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1907d985-7075-4b2e-a55e-b0b009af5954-kubelet-dir\") pod \"1907d985-7075-4b2e-a55e-b0b009af5954\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.746889 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1907d985-7075-4b2e-a55e-b0b009af5954-kube-api-access\") pod \"1907d985-7075-4b2e-a55e-b0b009af5954\" (UID: \"1907d985-7075-4b2e-a55e-b0b009af5954\") " Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.746963 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1907d985-7075-4b2e-a55e-b0b009af5954-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1907d985-7075-4b2e-a55e-b0b009af5954" (UID: "1907d985-7075-4b2e-a55e-b0b009af5954"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.747233 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb7d99c-8f6c-4794-b9d8-83b248bae45f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.747251 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1907d985-7075-4b2e-a55e-b0b009af5954-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.753544 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1907d985-7075-4b2e-a55e-b0b009af5954-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1907d985-7075-4b2e-a55e-b0b009af5954" (UID: "1907d985-7075-4b2e-a55e-b0b009af5954"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.847965 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1907d985-7075-4b2e-a55e-b0b009af5954-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:10 crc kubenswrapper[5108]: I1212 14:13:10.848191 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p4g92"] Dec 12 14:13:11 crc kubenswrapper[5108]: I1212 14:13:11.215058 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:13:11 crc kubenswrapper[5108]: I1212 14:13:11.215071 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ddb7d99c-8f6c-4794-b9d8-83b248bae45f","Type":"ContainerDied","Data":"b1d60e247f02c43a0a4791b6d2e2e600e7388f1cf5ff9efb89eb6659fc160129"} Dec 12 14:13:11 crc kubenswrapper[5108]: I1212 14:13:11.215130 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d60e247f02c43a0a4791b6d2e2e600e7388f1cf5ff9efb89eb6659fc160129" Dec 12 14:13:11 crc kubenswrapper[5108]: I1212 14:13:11.216942 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1907d985-7075-4b2e-a55e-b0b009af5954","Type":"ContainerDied","Data":"b55f32216b79cbb94ac43f20c575b41f8710c7bba9af6899c4ef55bc060b8c40"} Dec 12 14:13:11 crc kubenswrapper[5108]: I1212 14:13:11.216975 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b55f32216b79cbb94ac43f20c575b41f8710c7bba9af6899c4ef55bc060b8c40" Dec 12 14:13:11 crc kubenswrapper[5108]: I1212 14:13:11.217028 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:13:13 crc kubenswrapper[5108]: I1212 14:13:13.230267 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9bn92_fab2bff9-a63d-4213-b55b-c19d14831aa5/kube-multus-additional-cni-plugins/0.log" Dec 12 14:13:13 crc kubenswrapper[5108]: I1212 14:13:13.230349 5108 generic.go:358] "Generic (PLEG): container finished" podID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" exitCode=137 Dec 12 14:13:13 crc kubenswrapper[5108]: I1212 14:13:13.230423 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" event={"ID":"fab2bff9-a63d-4213-b55b-c19d14831aa5","Type":"ContainerDied","Data":"ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076"} Dec 12 14:13:14 crc kubenswrapper[5108]: W1212 14:13:14.559726 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8c95a75_0c3b_4caa_9b09_30c6dca73e72.slice/crio-11693edd097416dc12808f463ab944ca5c612c342e8868a8ab643958611d5aa1 WatchSource:0}: Error finding container 11693edd097416dc12808f463ab944ca5c612c342e8868a8ab643958611d5aa1: Status 404 returned error can't find the container with id 11693edd097416dc12808f463ab944ca5c612c342e8868a8ab643958611d5aa1 Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.660669 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9bn92_fab2bff9-a63d-4213-b55b-c19d14831aa5/kube-multus-additional-cni-plugins/0.log" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.660743 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.807505 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvfzh\" (UniqueName: \"kubernetes.io/projected/fab2bff9-a63d-4213-b55b-c19d14831aa5-kube-api-access-xvfzh\") pod \"fab2bff9-a63d-4213-b55b-c19d14831aa5\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.807558 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fab2bff9-a63d-4213-b55b-c19d14831aa5-cni-sysctl-allowlist\") pod \"fab2bff9-a63d-4213-b55b-c19d14831aa5\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.807639 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fab2bff9-a63d-4213-b55b-c19d14831aa5-tuning-conf-dir\") pod \"fab2bff9-a63d-4213-b55b-c19d14831aa5\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.807670 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fab2bff9-a63d-4213-b55b-c19d14831aa5-ready\") pod \"fab2bff9-a63d-4213-b55b-c19d14831aa5\" (UID: \"fab2bff9-a63d-4213-b55b-c19d14831aa5\") " Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.808582 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fab2bff9-a63d-4213-b55b-c19d14831aa5-ready" (OuterVolumeSpecName: "ready") pod "fab2bff9-a63d-4213-b55b-c19d14831aa5" (UID: "fab2bff9-a63d-4213-b55b-c19d14831aa5"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.808624 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab2bff9-a63d-4213-b55b-c19d14831aa5-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "fab2bff9-a63d-4213-b55b-c19d14831aa5" (UID: "fab2bff9-a63d-4213-b55b-c19d14831aa5"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.809108 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab2bff9-a63d-4213-b55b-c19d14831aa5-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "fab2bff9-a63d-4213-b55b-c19d14831aa5" (UID: "fab2bff9-a63d-4213-b55b-c19d14831aa5"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.820536 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab2bff9-a63d-4213-b55b-c19d14831aa5-kube-api-access-xvfzh" (OuterVolumeSpecName: "kube-api-access-xvfzh") pod "fab2bff9-a63d-4213-b55b-c19d14831aa5" (UID: "fab2bff9-a63d-4213-b55b-c19d14831aa5"). InnerVolumeSpecName "kube-api-access-xvfzh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.861452 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-bjwlp" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.911738 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvfzh\" (UniqueName: \"kubernetes.io/projected/fab2bff9-a63d-4213-b55b-c19d14831aa5-kube-api-access-xvfzh\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.911766 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fab2bff9-a63d-4213-b55b-c19d14831aa5-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.911775 5108 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fab2bff9-a63d-4213-b55b-c19d14831aa5-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:14 crc kubenswrapper[5108]: I1212 14:13:14.911783 5108 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fab2bff9-a63d-4213-b55b-c19d14831aa5-ready\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.112513 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.312275 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vmq" event={"ID":"f963a2e4-7bac-4938-ba67-f65a48ac4806","Type":"ContainerStarted","Data":"fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.315104 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-p4g92" event={"ID":"d8c95a75-0c3b-4caa-9b09-30c6dca73e72","Type":"ContainerStarted","Data":"11693edd097416dc12808f463ab944ca5c612c342e8868a8ab643958611d5aa1"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.333275 5108 generic.go:358] "Generic (PLEG): container finished" podID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerID="9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565" exitCode=0 Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.333396 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9h4" event={"ID":"35559d5b-861e-4f71-b4fe-9cafa147f46b","Type":"ContainerDied","Data":"9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.336493 5108 generic.go:358] "Generic (PLEG): container finished" podID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerID="9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e" exitCode=0 Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.336643 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljwkh" event={"ID":"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401","Type":"ContainerDied","Data":"9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.341148 5108 generic.go:358] "Generic (PLEG): container finished" podID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerID="2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25" exitCode=0 Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.341184 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddbtk" event={"ID":"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b","Type":"ContainerDied","Data":"2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.346631 
5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9bn92_fab2bff9-a63d-4213-b55b-c19d14831aa5/kube-multus-additional-cni-plugins/0.log" Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.346848 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.352090 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bn92" event={"ID":"fab2bff9-a63d-4213-b55b-c19d14831aa5","Type":"ContainerDied","Data":"169237c0d573255b15ee0e04266328cb4a6cd2bbe9de8eaeb8a133e88e0bf90c"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.352174 5108 scope.go:117] "RemoveContainer" containerID="ab4cc74fd6f191761fb8e5e4a4df86e35a7ee6203fb2b8458d6522d41b931076" Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.367612 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zp6xr" event={"ID":"d9418022-cc61-44ab-99a4-4afbf84fad60","Type":"ContainerStarted","Data":"976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.380688 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdqkf" event={"ID":"4273b638-c6db-4a97-bbf1-e7390f6b555a","Type":"ContainerStarted","Data":"997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2"} Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.513822 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bn92"] Dec 12 14:13:15 crc kubenswrapper[5108]: I1212 14:13:15.517530 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bn92"] Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.387296 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-r4bxk" event={"ID":"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7","Type":"ContainerStarted","Data":"3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.390841 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddbtk" event={"ID":"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b","Type":"ContainerStarted","Data":"b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.394549 5108 generic.go:358] "Generic (PLEG): container finished" podID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerID="976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850" exitCode=0 Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.394622 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zp6xr" event={"ID":"d9418022-cc61-44ab-99a4-4afbf84fad60","Type":"ContainerDied","Data":"976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.400649 5108 generic.go:358] "Generic (PLEG): container finished" podID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerID="997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2" exitCode=0 Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.400790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdqkf" event={"ID":"4273b638-c6db-4a97-bbf1-e7390f6b555a","Type":"ContainerDied","Data":"997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.413685 5108 generic.go:358] "Generic (PLEG): container finished" podID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerID="fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c" exitCode=0 Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.413780 
5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vmq" event={"ID":"f963a2e4-7bac-4938-ba67-f65a48ac4806","Type":"ContainerDied","Data":"fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.416440 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfb2d" event={"ID":"b05ea99a-815e-48ce-b4bb-1efda1405964","Type":"ContainerStarted","Data":"d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.417952 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p4g92" event={"ID":"d8c95a75-0c3b-4caa-9b09-30c6dca73e72","Type":"ContainerStarted","Data":"44cfb69bb12f18f8280bb66e0063620901ab121a1e44a569d4b7b9b602b6a3c8"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.417982 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p4g92" event={"ID":"d8c95a75-0c3b-4caa-9b09-30c6dca73e72","Type":"ContainerStarted","Data":"d65301a077ca25b35066a577dab0d418e14626a37f70b167d4d7f8a3cfbe49b2"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.438181 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9h4" event={"ID":"35559d5b-861e-4f71-b4fe-9cafa147f46b","Type":"ContainerStarted","Data":"e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.440988 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljwkh" event={"ID":"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401","Type":"ContainerStarted","Data":"5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b"} Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.447697 5108 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-multus/network-metrics-daemon-p4g92" podStartSLOduration=129.447681366 podStartE2EDuration="2m9.447681366s" podCreationTimestamp="2025-12-12 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:13:16.446889855 +0000 UTC m=+149.354881034" watchObservedRunningTime="2025-12-12 14:13:16.447681366 +0000 UTC m=+149.355672535" Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.480628 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ddbtk" podStartSLOduration=7.843166913 podStartE2EDuration="30.480611666s" podCreationTimestamp="2025-12-12 14:12:46 +0000 UTC" firstStartedPulling="2025-12-12 14:12:47.915971079 +0000 UTC m=+120.823962238" lastFinishedPulling="2025-12-12 14:13:10.553415842 +0000 UTC m=+143.461406991" observedRunningTime="2025-12-12 14:13:16.479374274 +0000 UTC m=+149.387365453" watchObservedRunningTime="2025-12-12 14:13:16.480611666 +0000 UTC m=+149.388602835" Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.605826 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ljwkh" podStartSLOduration=5.925318228 podStartE2EDuration="30.605812178s" podCreationTimestamp="2025-12-12 14:12:46 +0000 UTC" firstStartedPulling="2025-12-12 14:12:49.942699616 +0000 UTC m=+122.850690775" lastFinishedPulling="2025-12-12 14:13:14.623193516 +0000 UTC m=+147.531184725" observedRunningTime="2025-12-12 14:13:16.600473658 +0000 UTC m=+149.508464837" watchObservedRunningTime="2025-12-12 14:13:16.605812178 +0000 UTC m=+149.513803337" Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.742285 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:13:16 crc kubenswrapper[5108]: I1212 14:13:16.742329 5108 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.415825 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" path="/var/lib/kubelet/pods/fab2bff9-a63d-4213-b55b-c19d14831aa5/volumes" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.450689 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zp6xr" event={"ID":"d9418022-cc61-44ab-99a4-4afbf84fad60","Type":"ContainerStarted","Data":"722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d"} Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.452848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdqkf" event={"ID":"4273b638-c6db-4a97-bbf1-e7390f6b555a","Type":"ContainerStarted","Data":"829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92"} Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.455513 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vmq" event={"ID":"f963a2e4-7bac-4938-ba67-f65a48ac4806","Type":"ContainerStarted","Data":"163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a"} Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.457206 5108 generic.go:358] "Generic (PLEG): container finished" podID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerID="d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9" exitCode=0 Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.457335 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfb2d" event={"ID":"b05ea99a-815e-48ce-b4bb-1efda1405964","Type":"ContainerDied","Data":"d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9"} Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.488953 5108 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fg9h4" podStartSLOduration=10.131831771 podStartE2EDuration="33.488934742s" podCreationTimestamp="2025-12-12 14:12:44 +0000 UTC" firstStartedPulling="2025-12-12 14:12:47.203451808 +0000 UTC m=+120.111442967" lastFinishedPulling="2025-12-12 14:13:10.560554779 +0000 UTC m=+143.468545938" observedRunningTime="2025-12-12 14:13:16.626072676 +0000 UTC m=+149.534063835" watchObservedRunningTime="2025-12-12 14:13:17.488934742 +0000 UTC m=+150.396925901" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.490590 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zp6xr" podStartSLOduration=5.781727307 podStartE2EDuration="32.490584505s" podCreationTimestamp="2025-12-12 14:12:45 +0000 UTC" firstStartedPulling="2025-12-12 14:12:47.918984329 +0000 UTC m=+120.826975488" lastFinishedPulling="2025-12-12 14:13:14.627841527 +0000 UTC m=+147.535832686" observedRunningTime="2025-12-12 14:13:17.488370467 +0000 UTC m=+150.396361646" watchObservedRunningTime="2025-12-12 14:13:17.490584505 +0000 UTC m=+150.398575664" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.517165 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.517254 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.520310 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pdqkf" podStartSLOduration=5.055486067 podStartE2EDuration="32.520290341s" podCreationTimestamp="2025-12-12 14:12:45 +0000 UTC" firstStartedPulling="2025-12-12 14:12:47.203922521 +0000 UTC m=+120.111913700" 
lastFinishedPulling="2025-12-12 14:13:14.668726825 +0000 UTC m=+147.576717974" observedRunningTime="2025-12-12 14:13:17.518640138 +0000 UTC m=+150.426631307" watchObservedRunningTime="2025-12-12 14:13:17.520290341 +0000 UTC m=+150.428281500" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.558662 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-99vmq" podStartSLOduration=10.165740147 podStartE2EDuration="33.558644273s" podCreationTimestamp="2025-12-12 14:12:44 +0000 UTC" firstStartedPulling="2025-12-12 14:12:47.204413224 +0000 UTC m=+120.112404403" lastFinishedPulling="2025-12-12 14:13:10.59731737 +0000 UTC m=+143.505308529" observedRunningTime="2025-12-12 14:13:17.554278079 +0000 UTC m=+150.462269238" watchObservedRunningTime="2025-12-12 14:13:17.558644273 +0000 UTC m=+150.466635432" Dec 12 14:13:17 crc kubenswrapper[5108]: I1212 14:13:17.911632 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ddbtk" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:17 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:17 crc kubenswrapper[5108]: > Dec 12 14:13:18 crc kubenswrapper[5108]: I1212 14:13:18.468065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfb2d" event={"ID":"b05ea99a-815e-48ce-b4bb-1efda1405964","Type":"ContainerStarted","Data":"e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f"} Dec 12 14:13:18 crc kubenswrapper[5108]: I1212 14:13:18.470659 5108 generic.go:358] "Generic (PLEG): container finished" podID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerID="3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514" exitCode=0 Dec 12 14:13:18 crc kubenswrapper[5108]: I1212 14:13:18.471172 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-r4bxk" event={"ID":"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7","Type":"ContainerDied","Data":"3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514"} Dec 12 14:13:18 crc kubenswrapper[5108]: I1212 14:13:18.493917 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bfb2d" podStartSLOduration=8.755821041 podStartE2EDuration="31.493897259s" podCreationTimestamp="2025-12-12 14:12:47 +0000 UTC" firstStartedPulling="2025-12-12 14:12:52.00918886 +0000 UTC m=+124.917180019" lastFinishedPulling="2025-12-12 14:13:14.747265078 +0000 UTC m=+147.655256237" observedRunningTime="2025-12-12 14:13:18.492693278 +0000 UTC m=+151.400684457" watchObservedRunningTime="2025-12-12 14:13:18.493897259 +0000 UTC m=+151.401888428" Dec 12 14:13:18 crc kubenswrapper[5108]: I1212 14:13:18.586373 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ljwkh" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:18 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:18 crc kubenswrapper[5108]: > Dec 12 14:13:19 crc kubenswrapper[5108]: I1212 14:13:19.477651 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4bxk" event={"ID":"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7","Type":"ContainerStarted","Data":"38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9"} Dec 12 14:13:19 crc kubenswrapper[5108]: I1212 14:13:19.502933 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r4bxk" podStartSLOduration=8.826621622 podStartE2EDuration="31.502913642s" podCreationTimestamp="2025-12-12 14:12:48 +0000 UTC" firstStartedPulling="2025-12-12 14:12:52.013447954 +0000 UTC m=+124.921439113" lastFinishedPulling="2025-12-12 
14:13:14.689739974 +0000 UTC m=+147.597731133" observedRunningTime="2025-12-12 14:13:19.502772359 +0000 UTC m=+152.410763538" watchObservedRunningTime="2025-12-12 14:13:19.502913642 +0000 UTC m=+152.410904801" Dec 12 14:13:19 crc kubenswrapper[5108]: I1212 14:13:19.511979 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:13:19 crc kubenswrapper[5108]: I1212 14:13:19.512031 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-r4bxk" Dec 12 14:13:19 crc kubenswrapper[5108]: I1212 14:13:19.695256 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:13:19 crc kubenswrapper[5108]: I1212 14:13:19.695306 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:13:20 crc kubenswrapper[5108]: I1212 14:13:20.550221 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r4bxk" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:20 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:20 crc kubenswrapper[5108]: > Dec 12 14:13:20 crc kubenswrapper[5108]: I1212 14:13:20.741478 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bfb2d" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:20 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:20 crc kubenswrapper[5108]: > Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.885976 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 14:13:23 crc 
kubenswrapper[5108]: I1212 14:13:23.887034 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerName="kube-multus-additional-cni-plugins" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887052 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerName="kube-multus-additional-cni-plugins" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887123 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ddb7d99c-8f6c-4794-b9d8-83b248bae45f" containerName="pruner" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887132 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb7d99c-8f6c-4794-b9d8-83b248bae45f" containerName="pruner" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887144 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1907d985-7075-4b2e-a55e-b0b009af5954" containerName="pruner" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887151 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1907d985-7075-4b2e-a55e-b0b009af5954" containerName="pruner" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887309 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ddb7d99c-8f6c-4794-b9d8-83b248bae45f" containerName="pruner" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887333 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1907d985-7075-4b2e-a55e-b0b009af5954" containerName="pruner" Dec 12 14:13:23 crc kubenswrapper[5108]: I1212 14:13:23.887371 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="fab2bff9-a63d-4213-b55b-c19d14831aa5" containerName="kube-multus-additional-cni-plugins" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.322329 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] 
Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.322602 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.325028 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.325895 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.504730 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea0244d-7268-460d-9943-a509ac061dbd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.504790 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea0244d-7268-460d-9943-a509ac061dbd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.606417 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea0244d-7268-460d-9943-a509ac061dbd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.606474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4ea0244d-7268-460d-9943-a509ac061dbd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.606597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea0244d-7268-460d-9943-a509ac061dbd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.637809 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea0244d-7268-460d-9943-a509ac061dbd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.639902 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.969630 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:13:24 crc kubenswrapper[5108]: I1212 14:13:24.970030 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.119898 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 14:13:25 crc kubenswrapper[5108]: W1212 14:13:25.131776 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ea0244d_7268_460d_9943_a509ac061dbd.slice/crio-02dea8e66df40d1d9a3619ba768d3281767961280bf3b0d3a45c983b212c6682 WatchSource:0}: Error finding container 02dea8e66df40d1d9a3619ba768d3281767961280bf3b0d3a45c983b212c6682: Status 404 returned error can't find the container with id 02dea8e66df40d1d9a3619ba768d3281767961280bf3b0d3a45c983b212c6682 Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.301791 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.308095 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.382993 5108 ???:1] "http: TLS handshake error from 192.168.126.11:47408: no serving certificate available for the kubelet" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.512638 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" 
event={"ID":"4ea0244d-7268-460d-9943-a509ac061dbd","Type":"ContainerStarted","Data":"02dea8e66df40d1d9a3619ba768d3281767961280bf3b0d3a45c983b212c6682"} Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.595439 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.604218 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.604271 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.652604 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.734675 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdqkf" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.734733 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-pdqkf" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.778399 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdqkf" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.914940 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zp6xr" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.915355 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-zp6xr" Dec 12 14:13:25 crc kubenswrapper[5108]: I1212 14:13:25.997161 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-zp6xr" Dec 12 14:13:26 crc kubenswrapper[5108]: I1212 14:13:26.518715 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4ea0244d-7268-460d-9943-a509ac061dbd","Type":"ContainerStarted","Data":"d31c6754df58d16d68b09e22782be605bafdf1f888f6fa97215c06e1fe9dda9a"} Dec 12 14:13:26 crc kubenswrapper[5108]: I1212 14:13:26.557826 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zp6xr" Dec 12 14:13:26 crc kubenswrapper[5108]: I1212 14:13:26.577504 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:13:26 crc kubenswrapper[5108]: I1212 14:13:26.577638 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdqkf" Dec 12 14:13:26 crc kubenswrapper[5108]: I1212 14:13:26.774965 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:13:26 crc kubenswrapper[5108]: I1212 14:13:26.810000 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:13:27 crc kubenswrapper[5108]: I1212 14:13:27.158784 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zp6xr"] Dec 12 14:13:27 crc kubenswrapper[5108]: I1212 14:13:27.524481 5108 generic.go:358] "Generic (PLEG): container finished" podID="4ea0244d-7268-460d-9943-a509ac061dbd" containerID="d31c6754df58d16d68b09e22782be605bafdf1f888f6fa97215c06e1fe9dda9a" exitCode=0 Dec 12 14:13:27 crc kubenswrapper[5108]: I1212 14:13:27.524655 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" 
event={"ID":"4ea0244d-7268-460d-9943-a509ac061dbd","Type":"ContainerDied","Data":"d31c6754df58d16d68b09e22782be605bafdf1f888f6fa97215c06e1fe9dda9a"} Dec 12 14:13:27 crc kubenswrapper[5108]: I1212 14:13:27.562124 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:13:27 crc kubenswrapper[5108]: I1212 14:13:27.606145 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ljwkh" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.156692 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdqkf"] Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.531478 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pdqkf" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="registry-server" containerID="cri-o://829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92" gracePeriod=2 Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.533169 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zp6xr" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="registry-server" containerID="cri-o://722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d" gracePeriod=2 Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.726526 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.778444 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea0244d-7268-460d-9943-a509ac061dbd-kubelet-dir\") pod \"4ea0244d-7268-460d-9943-a509ac061dbd\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.778542 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea0244d-7268-460d-9943-a509ac061dbd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ea0244d-7268-460d-9943-a509ac061dbd" (UID: "4ea0244d-7268-460d-9943-a509ac061dbd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.778586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea0244d-7268-460d-9943-a509ac061dbd-kube-api-access\") pod \"4ea0244d-7268-460d-9943-a509ac061dbd\" (UID: \"4ea0244d-7268-460d-9943-a509ac061dbd\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.778882 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea0244d-7268-460d-9943-a509ac061dbd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.788327 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea0244d-7268-460d-9943-a509ac061dbd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ea0244d-7268-460d-9943-a509ac061dbd" (UID: "4ea0244d-7268-460d-9943-a509ac061dbd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.879749 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea0244d-7268-460d-9943-a509ac061dbd-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.887953 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zp6xr" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.919068 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdqkf" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.981120 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-utilities\") pod \"d9418022-cc61-44ab-99a4-4afbf84fad60\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.981264 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-utilities\") pod \"4273b638-c6db-4a97-bbf1-e7390f6b555a\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.982036 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-utilities" (OuterVolumeSpecName: "utilities") pod "d9418022-cc61-44ab-99a4-4afbf84fad60" (UID: "d9418022-cc61-44ab-99a4-4afbf84fad60"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.983061 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-utilities" (OuterVolumeSpecName: "utilities") pod "4273b638-c6db-4a97-bbf1-e7390f6b555a" (UID: "4273b638-c6db-4a97-bbf1-e7390f6b555a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.983266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-catalog-content\") pod \"4273b638-c6db-4a97-bbf1-e7390f6b555a\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.983404 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-catalog-content\") pod \"d9418022-cc61-44ab-99a4-4afbf84fad60\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.983447 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdddn\" (UniqueName: \"kubernetes.io/projected/4273b638-c6db-4a97-bbf1-e7390f6b555a-kube-api-access-cdddn\") pod \"4273b638-c6db-4a97-bbf1-e7390f6b555a\" (UID: \"4273b638-c6db-4a97-bbf1-e7390f6b555a\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.983488 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bxlk\" (UniqueName: \"kubernetes.io/projected/d9418022-cc61-44ab-99a4-4afbf84fad60-kube-api-access-4bxlk\") pod \"d9418022-cc61-44ab-99a4-4afbf84fad60\" (UID: \"d9418022-cc61-44ab-99a4-4afbf84fad60\") " Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.983872 5108 
reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.983904 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.989897 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4273b638-c6db-4a97-bbf1-e7390f6b555a-kube-api-access-cdddn" (OuterVolumeSpecName: "kube-api-access-cdddn") pod "4273b638-c6db-4a97-bbf1-e7390f6b555a" (UID: "4273b638-c6db-4a97-bbf1-e7390f6b555a"). InnerVolumeSpecName "kube-api-access-cdddn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:28 crc kubenswrapper[5108]: I1212 14:13:28.995487 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9418022-cc61-44ab-99a4-4afbf84fad60-kube-api-access-4bxlk" (OuterVolumeSpecName: "kube-api-access-4bxlk") pod "d9418022-cc61-44ab-99a4-4afbf84fad60" (UID: "d9418022-cc61-44ab-99a4-4afbf84fad60"). InnerVolumeSpecName "kube-api-access-4bxlk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.017060 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4273b638-c6db-4a97-bbf1-e7390f6b555a" (UID: "4273b638-c6db-4a97-bbf1-e7390f6b555a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.051567 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9418022-cc61-44ab-99a4-4afbf84fad60" (UID: "d9418022-cc61-44ab-99a4-4afbf84fad60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.085600 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4273b638-c6db-4a97-bbf1-e7390f6b555a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.085642 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9418022-cc61-44ab-99a4-4afbf84fad60-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.085652 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cdddn\" (UniqueName: \"kubernetes.io/projected/4273b638-c6db-4a97-bbf1-e7390f6b555a-kube-api-access-cdddn\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.085662 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4bxlk\" (UniqueName: \"kubernetes.io/projected/d9418022-cc61-44ab-99a4-4afbf84fad60-kube-api-access-4bxlk\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.538528 5108 generic.go:358] "Generic (PLEG): container finished" podID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerID="722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d" exitCode=0
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.539019 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zp6xr"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.539190 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zp6xr" event={"ID":"d9418022-cc61-44ab-99a4-4afbf84fad60","Type":"ContainerDied","Data":"722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d"}
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.539267 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zp6xr" event={"ID":"d9418022-cc61-44ab-99a4-4afbf84fad60","Type":"ContainerDied","Data":"711d77d1affee574ef330cb08e0e5686c9677197d5b2c4d83d1f2621d35c9663"}
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.539291 5108 scope.go:117] "RemoveContainer" containerID="722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.546205 5108 generic.go:358] "Generic (PLEG): container finished" podID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerID="829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92" exitCode=0
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.546290 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdqkf" event={"ID":"4273b638-c6db-4a97-bbf1-e7390f6b555a","Type":"ContainerDied","Data":"829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92"}
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.546340 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdqkf" event={"ID":"4273b638-c6db-4a97-bbf1-e7390f6b555a","Type":"ContainerDied","Data":"29415488d986a8f8c40f371aebead671793e3898e55f80b55454eac3a734d690"}
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.546536 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdqkf"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.549531 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4ea0244d-7268-460d-9943-a509ac061dbd","Type":"ContainerDied","Data":"02dea8e66df40d1d9a3619ba768d3281767961280bf3b0d3a45c983b212c6682"}
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.549571 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02dea8e66df40d1d9a3619ba768d3281767961280bf3b0d3a45c983b212c6682"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.549661 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.558477 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljwkh"]
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.558797 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ljwkh" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="registry-server" containerID="cri-o://5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b" gracePeriod=2
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.565608 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zp6xr"]
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.565671 5108 scope.go:117] "RemoveContainer" containerID="976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.568107 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r4bxk"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.573127 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zp6xr"]
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.589211 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdqkf"]
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.589663 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pdqkf"]
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.595409 5108 scope.go:117] "RemoveContainer" containerID="009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.613180 5108 scope.go:117] "RemoveContainer" containerID="722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.613391 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r4bxk"
Dec 12 14:13:29 crc kubenswrapper[5108]: E1212 14:13:29.613575 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d\": container with ID starting with 722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d not found: ID does not exist" containerID="722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.613627 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d"} err="failed to get container status \"722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d\": rpc error: code = NotFound desc = could not find container \"722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d\": container with ID starting with 722e6ee2eed449c1b2b31a807a38ce498bcd4e8d4901f3bd615761efc0f6af4d not found: ID does not exist"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.613675 5108 scope.go:117] "RemoveContainer" containerID="976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850"
Dec 12 14:13:29 crc kubenswrapper[5108]: E1212 14:13:29.613974 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850\": container with ID starting with 976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850 not found: ID does not exist" containerID="976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.614006 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850"} err="failed to get container status \"976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850\": rpc error: code = NotFound desc = could not find container \"976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850\": container with ID starting with 976883b10d39727ac5944d32a2a5ce03f1edb02d8c3ec524ae9d0173748bd850 not found: ID does not exist"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.614028 5108 scope.go:117] "RemoveContainer" containerID="009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086"
Dec 12 14:13:29 crc kubenswrapper[5108]: E1212 14:13:29.614444 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086\": container with ID starting with 009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086 not found: ID does not exist" containerID="009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.614471 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086"} err="failed to get container status \"009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086\": rpc error: code = NotFound desc = could not find container \"009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086\": container with ID starting with 009a6f04e60c427b660d8fc803edd2ae7985cb916c6badc6f670ae87be833086 not found: ID does not exist"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.614486 5108 scope.go:117] "RemoveContainer" containerID="829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.626074 5108 scope.go:117] "RemoveContainer" containerID="997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.640376 5108 scope.go:117] "RemoveContainer" containerID="9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.657656 5108 scope.go:117] "RemoveContainer" containerID="829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92"
Dec 12 14:13:29 crc kubenswrapper[5108]: E1212 14:13:29.663499 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92\": container with ID starting with 829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92 not found: ID does not exist" containerID="829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.663555 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92"} err="failed to get container status \"829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92\": rpc error: code = NotFound desc = could not find container \"829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92\": container with ID starting with 829ac3a49873def3e3e5663f5fbd645b5146aff1b9132b17f5442a6e3b0f0e92 not found: ID does not exist"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.663586 5108 scope.go:117] "RemoveContainer" containerID="997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2"
Dec 12 14:13:29 crc kubenswrapper[5108]: E1212 14:13:29.664846 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2\": container with ID starting with 997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2 not found: ID does not exist" containerID="997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.664883 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2"} err="failed to get container status \"997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2\": rpc error: code = NotFound desc = could not find container \"997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2\": container with ID starting with 997d77dcf398cd37921c416b2fe39ceca8e88a03d0037dcd7faf08a2c0f49be2 not found: ID does not exist"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.664905 5108 scope.go:117] "RemoveContainer" containerID="9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c"
Dec 12 14:13:29 crc kubenswrapper[5108]: E1212 14:13:29.665387 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c\": container with ID starting with 9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c not found: ID does not exist" containerID="9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.665426 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c"} err="failed to get container status \"9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c\": rpc error: code = NotFound desc = could not find container \"9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c\": container with ID starting with 9c384c4e7101f4eecd430ba00727e53a6e090c49ca476ad68cb270a2efda381c not found: ID does not exist"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.734194 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bfb2d"
Dec 12 14:13:29 crc kubenswrapper[5108]: I1212 14:13:29.765514 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bfb2d"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.086920 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087503 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="registry-server"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087522 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="registry-server"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087539 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="extract-utilities"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087547 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="extract-utilities"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087567 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="registry-server"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087575 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="registry-server"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087583 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ea0244d-7268-460d-9943-a509ac061dbd" containerName="pruner"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087590 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea0244d-7268-460d-9943-a509ac061dbd" containerName="pruner"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087603 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="extract-content"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087608 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="extract-content"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087617 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="extract-utilities"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087622 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="extract-utilities"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087639 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="extract-content"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087645 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="extract-content"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087727 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" containerName="registry-server"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087739 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ea0244d-7268-460d-9943-a509ac061dbd" containerName="pruner"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.087748 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" containerName="registry-server"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.165522 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.165653 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.167718 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.167862 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.203745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kube-api-access\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.203952 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-var-lock\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.204031 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.305200 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.305293 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kube-api-access\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.305371 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-var-lock\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.305455 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-var-lock\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.305506 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.338664 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kube-api-access\") pod \"installer-12-crc\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.381727 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljwkh"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.508057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-utilities\") pod \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") "
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.508182 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nng8g\" (UniqueName: \"kubernetes.io/projected/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-kube-api-access-nng8g\") pod \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") "
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.508320 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-catalog-content\") pod \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\" (UID: \"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401\") "
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.509740 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-utilities" (OuterVolumeSpecName: "utilities") pod "81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" (UID: "81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.513764 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-kube-api-access-nng8g" (OuterVolumeSpecName: "kube-api-access-nng8g") pod "81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" (UID: "81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401"). InnerVolumeSpecName "kube-api-access-nng8g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.519298 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" (UID: "81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.539975 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.557842 5108 generic.go:358] "Generic (PLEG): container finished" podID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerID="5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b" exitCode=0
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.557891 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljwkh" event={"ID":"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401","Type":"ContainerDied","Data":"5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b"}
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.557920 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljwkh"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.557954 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljwkh" event={"ID":"81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401","Type":"ContainerDied","Data":"609328d82ab0590e9acd5198f0a960395486655ef1ad0ee24b4a3a51c7d2cb4e"}
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.557975 5108 scope.go:117] "RemoveContainer" containerID="5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.576445 5108 scope.go:117] "RemoveContainer" containerID="9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.582455 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljwkh"]
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.586480 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljwkh"]
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.606839 5108 scope.go:117] "RemoveContainer" containerID="34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.610791 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nng8g\" (UniqueName: \"kubernetes.io/projected/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-kube-api-access-nng8g\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.610811 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.610822 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.622467 5108 scope.go:117] "RemoveContainer" containerID="5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b"
Dec 12 14:13:30 crc kubenswrapper[5108]: E1212 14:13:30.622816 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b\": container with ID starting with 5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b not found: ID does not exist" containerID="5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.622856 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b"} err="failed to get container status \"5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b\": rpc error: code = NotFound desc = could not find container \"5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b\": container with ID starting with 5d6896569753e9473980bdb514aa1b002b00c3dfaf1dc1b61634ff5b83440d5b not found: ID does not exist"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.622882 5108 scope.go:117] "RemoveContainer" containerID="9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e"
Dec 12 14:13:30 crc kubenswrapper[5108]: E1212 14:13:30.630229 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e\": container with ID starting with 9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e not found: ID does not exist" containerID="9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.630270 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e"} err="failed to get container status \"9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e\": rpc error: code = NotFound desc = could not find container \"9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e\": container with ID starting with 9404b970ba792a167d91998c8cd73a2104eade0276681078660d8e6979618d7e not found: ID does not exist"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.630299 5108 scope.go:117] "RemoveContainer" containerID="34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046"
Dec 12 14:13:30 crc kubenswrapper[5108]: E1212 14:13:30.631857 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046\": container with ID starting with 34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046 not found: ID does not exist" containerID="34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.631899 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046"} err="failed to get container status \"34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046\": rpc error: code = NotFound desc = could not find container \"34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046\": container with ID starting with 34cab8602f4c092930f78ec38fef52b4cad3e4bfedc1fa439269412a99031046 not found: ID does not exist"
Dec 12 14:13:30 crc kubenswrapper[5108]: I1212 14:13:30.722592 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 12 14:13:31 crc kubenswrapper[5108]: I1212 14:13:31.417421 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4273b638-c6db-4a97-bbf1-e7390f6b555a" path="/var/lib/kubelet/pods/4273b638-c6db-4a97-bbf1-e7390f6b555a/volumes"
Dec 12 14:13:31 crc kubenswrapper[5108]: I1212 14:13:31.418292 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" path="/var/lib/kubelet/pods/81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401/volumes"
Dec 12 14:13:31 crc kubenswrapper[5108]: I1212 14:13:31.418983 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9418022-cc61-44ab-99a4-4afbf84fad60" path="/var/lib/kubelet/pods/d9418022-cc61-44ab-99a4-4afbf84fad60/volumes"
Dec 12 14:13:31 crc kubenswrapper[5108]: I1212 14:13:31.568404 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d58d8628-3aff-4f44-a12d-0e2df4f3ad87","Type":"ContainerStarted","Data":"72a3b23f86f0af6fac3a768a8fd5fe5ded18a239bab65858e7649ff0ea293c31"}
Dec 12 14:13:33 crc kubenswrapper[5108]: I1212 14:13:33.583822 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d58d8628-3aff-4f44-a12d-0e2df4f3ad87","Type":"ContainerStarted","Data":"248838e55535bdc59f6cb365c0b11ed34479d863fd8316ddb4e5b76c95bb9bfa"}
Dec 12 14:13:33 crc kubenswrapper[5108]: I1212 14:13:33.955424 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=3.955404392 podStartE2EDuration="3.955404392s" podCreationTimestamp="2025-12-12 14:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:13:33.60206572 +0000 UTC m=+166.510056869" watchObservedRunningTime="2025-12-12 14:13:33.955404392 +0000 UTC m=+166.863395561"
Dec 12 14:13:33 crc kubenswrapper[5108]: I1212 14:13:33.958464 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4bxk"]
Dec 12 14:13:33 crc kubenswrapper[5108]: I1212 14:13:33.958784 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r4bxk" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="registry-server" containerID="cri-o://38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9" gracePeriod=2
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.292763 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4bxk"
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.362265 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snpcc\" (UniqueName: \"kubernetes.io/projected/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-kube-api-access-snpcc\") pod \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") "
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.362351 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-catalog-content\") pod \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") "
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.362392 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-utilities\") pod \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\" (UID: \"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7\") "
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.363602 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-utilities" (OuterVolumeSpecName: "utilities") pod "aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" (UID: "aa7c7c10-bd3b-4044-8aa8-7875e9f908a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.368140 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-kube-api-access-snpcc" (OuterVolumeSpecName: "kube-api-access-snpcc") pod "aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" (UID: "aa7c7c10-bd3b-4044-8aa8-7875e9f908a7"). InnerVolumeSpecName "kube-api-access-snpcc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.464336 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-snpcc\" (UniqueName: \"kubernetes.io/projected/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-kube-api-access-snpcc\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.464367 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.469219 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" (UID: "aa7c7c10-bd3b-4044-8aa8-7875e9f908a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.565425 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.591340 5108 generic.go:358] "Generic (PLEG): container finished" podID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerID="38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9" exitCode=0
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.591432 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4bxk"
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.591414 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4bxk" event={"ID":"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7","Type":"ContainerDied","Data":"38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9"}
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.591599 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4bxk" event={"ID":"aa7c7c10-bd3b-4044-8aa8-7875e9f908a7","Type":"ContainerDied","Data":"308b4d885c432ea4adecea1a46074b66e94873858018cec2eaaa77b05a0ce336"}
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.591628 5108 scope.go:117] "RemoveContainer" containerID="38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9"
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.607427 5108 scope.go:117] "RemoveContainer" containerID="3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514"
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.621692 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4bxk"]
Dec 12 14:13:34 crc kubenswrapper[5108]: I1212
14:13:34.622839 5108 scope.go:117] "RemoveContainer" containerID="38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346" Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.624823 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r4bxk"] Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.636834 5108 scope.go:117] "RemoveContainer" containerID="38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9" Dec 12 14:13:34 crc kubenswrapper[5108]: E1212 14:13:34.638015 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9\": container with ID starting with 38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9 not found: ID does not exist" containerID="38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9" Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.638052 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9"} err="failed to get container status \"38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9\": rpc error: code = NotFound desc = could not find container \"38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9\": container with ID starting with 38b81aab5874b22722b0bb3fc335bcd386214c324ca0189dc5e84e5804e22cf9 not found: ID does not exist" Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.638071 5108 scope.go:117] "RemoveContainer" containerID="3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514" Dec 12 14:13:34 crc kubenswrapper[5108]: E1212 14:13:34.639001 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514\": container with ID 
starting with 3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514 not found: ID does not exist" containerID="3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514" Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.639033 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514"} err="failed to get container status \"3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514\": rpc error: code = NotFound desc = could not find container \"3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514\": container with ID starting with 3ca53f71bc480d3f6dd40a17e7ecbbe6ea7c29e6b8af26e19c34337f40d61514 not found: ID does not exist" Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.639051 5108 scope.go:117] "RemoveContainer" containerID="38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346" Dec 12 14:13:34 crc kubenswrapper[5108]: E1212 14:13:34.641575 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346\": container with ID starting with 38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346 not found: ID does not exist" containerID="38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346" Dec 12 14:13:34 crc kubenswrapper[5108]: I1212 14:13:34.641604 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346"} err="failed to get container status \"38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346\": rpc error: code = NotFound desc = could not find container \"38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346\": container with ID starting with 38e31e2ecab26e58df51647a33e82f81a72b42facc4db964adcd65d97d494346 not found: 
ID does not exist" Dec 12 14:13:35 crc kubenswrapper[5108]: I1212 14:13:35.416993 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" path="/var/lib/kubelet/pods/aa7c7c10-bd3b-4044-8aa8-7875e9f908a7/volumes" Dec 12 14:14:06 crc kubenswrapper[5108]: I1212 14:14:06.370066 5108 ???:1] "http: TLS handshake error from 192.168.126.11:47798: no serving certificate available for the kubelet" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.908950 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.909925 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966" gracePeriod=15 Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.909984 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003" gracePeriod=15 Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.910065 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8" gracePeriod=15 Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.910071 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f" gracePeriod=15 Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.910246 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18" gracePeriod=15 Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.911527 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915398 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="extract-utilities" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915434 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="extract-utilities" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915456 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915465 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915485 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915492 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915511 5108 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915520 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915532 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915538 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915561 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="registry-server" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915569 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="registry-server" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915585 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915592 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915624 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915632 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915647 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="extract-utilities" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915655 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="extract-utilities" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915670 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="extract-content" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915677 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="extract-content" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915693 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="extract-content" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915700 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="extract-content" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915737 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915778 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915798 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 14:14:10 crc 
kubenswrapper[5108]: I1212 14:14:10.915806 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915820 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="registry-server" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.915831 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="registry-server" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916838 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="aa7c7c10-bd3b-4044-8aa8-7875e9f908a7" containerName="registry-server" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916867 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916884 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916895 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916911 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="81f2fcb3-b42d-4bb0-9cb0-17ca67aaa401" containerName="registry-server" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916922 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916936 5108 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916951 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916964 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916972 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.916981 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.917647 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.917659 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.918586 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.918600 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.946658 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 
14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.952955 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:10 crc kubenswrapper[5108]: I1212 14:14:10.992052 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:10 crc kubenswrapper[5108]: E1212 14:14:10.992587 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.217:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063622 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063668 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063684 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063714 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063739 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063760 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063913 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.063960 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc 
kubenswrapper[5108]: I1212 14:14:11.063995 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.064018 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.164965 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165102 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165247 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165458 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165542 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165523 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165806 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165879 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.165999 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166034 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166070 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166136 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166167 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166138 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166218 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166293 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166644 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.166673 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.293715 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.323789 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18807d5374b7f52e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:14:11.32320491 +0000 UTC m=+204.231196069,LastTimestamp:2025-12-12 14:14:11.32320491 +0000 UTC m=+204.231196069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.630387 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.631242 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.631762 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.632288 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.632630 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.632690 5108 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.633348 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="200ms"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.805137 5108 generic.go:358] "Generic (PLEG): container finished" podID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" containerID="248838e55535bdc59f6cb365c0b11ed34479d863fd8316ddb4e5b76c95bb9bfa" exitCode=0
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.805244 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d58d8628-3aff-4f44-a12d-0e2df4f3ad87","Type":"ContainerDied","Data":"248838e55535bdc59f6cb365c0b11ed34479d863fd8316ddb4e5b76c95bb9bfa"}
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.807196 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.808368 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.808938 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18" exitCode=0
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.808960 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f" exitCode=0
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.808967 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003" exitCode=0
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.808974 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8" exitCode=2
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.809026 5108 scope.go:117] "RemoveContainer" containerID="efef1b5c827deceb497f177cb13ac091fbaccfce00faad7e3daee74ab981b6b9"
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.810703 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a"}
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.810754 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"b2e98a93ffec7055be51cff3eeeda4e772df61893bb2f557aa8d7e77dc10ac2b"}
Dec 12 14:14:11 crc kubenswrapper[5108]: I1212 14:14:11.811019 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.811514 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.217:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:11 crc kubenswrapper[5108]: E1212 14:14:11.834467 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="400ms"
Dec 12 14:14:12 crc kubenswrapper[5108]: E1212 14:14:12.235867 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="800ms"
Dec 12 14:14:12 crc kubenswrapper[5108]: I1212 14:14:12.821304 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 14:14:13 crc kubenswrapper[5108]: E1212 14:14:13.041351 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="1.6s"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.091788 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.200588 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kube-api-access\") pod \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.200856 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-var-lock\") pod \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.200874 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kubelet-dir\") pod \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\" (UID: \"d58d8628-3aff-4f44-a12d-0e2df4f3ad87\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.200973 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d58d8628-3aff-4f44-a12d-0e2df4f3ad87" (UID: "d58d8628-3aff-4f44-a12d-0e2df4f3ad87"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.201024 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-var-lock" (OuterVolumeSpecName: "var-lock") pod "d58d8628-3aff-4f44-a12d-0e2df4f3ad87" (UID: "d58d8628-3aff-4f44-a12d-0e2df4f3ad87"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.201228 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.201250 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.205773 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d58d8628-3aff-4f44-a12d-0e2df4f3ad87" (UID: "d58d8628-3aff-4f44-a12d-0e2df4f3ad87"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.299774 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.300518 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.302090 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d58d8628-3aff-4f44-a12d-0e2df4f3ad87-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.402965 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403232 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403263 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403310 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403346 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403410 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403532 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403645 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403886 5108 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403909 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403921 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.403935 5108 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.408182 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.418299 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.505662 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.834129 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d58d8628-3aff-4f44-a12d-0e2df4f3ad87","Type":"ContainerDied","Data":"72a3b23f86f0af6fac3a768a8fd5fe5ded18a239bab65858e7649ff0ea293c31"}
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.834187 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a3b23f86f0af6fac3a768a8fd5fe5ded18a239bab65858e7649ff0ea293c31"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.834550 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.839526 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.840597 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966" exitCode=0
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.840696 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.840758 5108 scope.go:117] "RemoveContainer" containerID="b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.862057 5108 scope.go:117] "RemoveContainer" containerID="9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.879876 5108 scope.go:117] "RemoveContainer" containerID="a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.908748 5108 scope.go:117] "RemoveContainer" containerID="d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.928671 5108 scope.go:117] "RemoveContainer" containerID="c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966"
Dec 12 14:14:13 crc kubenswrapper[5108]: I1212 14:14:13.958709 5108 scope.go:117] "RemoveContainer" containerID="e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.013787 5108 scope.go:117] "RemoveContainer" containerID="b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18"
Dec 12 14:14:14 crc kubenswrapper[5108]: E1212 14:14:14.014375 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18\": container with ID starting with b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18 not found: ID does not exist" containerID="b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.014462 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18"} err="failed to get container status \"b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18\": rpc error: code = NotFound desc = could not find container \"b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18\": container with ID starting with b22f9adfac1a559761b36302fddb52bf8ecdbacd43582fa5635b709cd8d6dd18 not found: ID does not exist"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.014563 5108 scope.go:117] "RemoveContainer" containerID="9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f"
Dec 12 14:14:14 crc kubenswrapper[5108]: E1212 14:14:14.015032 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f\": container with ID starting with 9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f not found: ID does not exist" containerID="9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.015121 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f"} err="failed to get container status \"9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f\": rpc error: code = NotFound desc = could not find container \"9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f\": container with ID starting with 9d9ecc493821faa4c2f5a082084284166d20dc85fb2afd25d3e8014cb324543f not found: ID does not exist"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.015139 5108 scope.go:117] "RemoveContainer" containerID="a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003"
Dec 12 14:14:14 crc kubenswrapper[5108]: E1212 14:14:14.015555 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003\": container with ID starting with a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003 not found: ID does not exist" containerID="a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.015593 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003"} err="failed to get container status \"a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003\": rpc error: code = NotFound desc = could not find container \"a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003\": container with ID starting with a68f2bbf194146ec111e9aa5753f597962de280e957a0a3ef23dd79173160003 not found: ID does not exist"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.015622 5108 scope.go:117] "RemoveContainer" containerID="d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8"
Dec 12 14:14:14 crc kubenswrapper[5108]: E1212 14:14:14.016157 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8\": container with ID starting with d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8 not found: ID does not exist" containerID="d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.016245 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8"} err="failed to get container status \"d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8\": rpc error: code = NotFound desc = could not find container \"d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8\": container with ID starting with d58b2258046b38d981e06423ad8bc68ff6163533de83a36d6beda29bc02b1da8 not found: ID does not exist"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.016272 5108 scope.go:117] "RemoveContainer" containerID="c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966"
Dec 12 14:14:14 crc kubenswrapper[5108]: E1212 14:14:14.016729 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966\": container with ID starting with c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966 not found: ID does not exist" containerID="c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.016756 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966"} err="failed to get container status \"c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966\": rpc error: code = NotFound desc = could not find container \"c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966\": container with ID starting with c86b90b7193b7901a601e3c65d607b6a5d462bb1a15a5010bf19e9c6d6036966 not found: ID does not exist"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.016777 5108 scope.go:117] "RemoveContainer" containerID="e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece"
Dec 12 14:14:14 crc kubenswrapper[5108]: E1212 14:14:14.017048 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece\": container with ID starting with e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece not found: ID does not exist" containerID="e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece"
Dec 12 14:14:14 crc kubenswrapper[5108]: I1212 14:14:14.017149 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece"} err="failed to get container status \"e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece\": rpc error: code = NotFound desc = could not find container \"e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece\": container with ID starting with e70ff1b9b3269f3fe2eabb3406b5c246c2bf71aebcb12730e80991aedd1f8ece not found: ID does not exist"
Dec 12 14:14:14 crc kubenswrapper[5108]: E1212 14:14:14.642797 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="3.2s"
Dec 12 14:14:15 crc kubenswrapper[5108]: I1212 14:14:15.956104 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:15 crc kubenswrapper[5108]: I1212 14:14:15.956757 5108 status_manager.go:895] "Failed to get status for pod" podUID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:17 crc kubenswrapper[5108]: I1212 14:14:17.412067 5108 status_manager.go:895] "Failed to get status for pod" podUID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:17 crc kubenswrapper[5108]: E1212 14:14:17.844213 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="6.4s"
Dec 12 14:14:18 crc kubenswrapper[5108]: E1212 14:14:18.967367 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18807d5374b7f52e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:14:11.32320491 +0000 UTC m=+204.231196069,LastTimestamp:2025-12-12 14:14:11.32320491 +0000 UTC m=+204.231196069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.406834 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.407925 5108 status_manager.go:895] "Failed to get status for pod" podUID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.424735 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.424766 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.425204 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.425497 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.897579 5108 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="1857404df094d69960fa35f837dbc9212be859a58214f22f34f61adc5a3cf1e6" exitCode=0
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.897655 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"1857404df094d69960fa35f837dbc9212be859a58214f22f34f61adc5a3cf1e6"}
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.897945 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"af1917ef6c0cb732c5f701adc2cab69b9f9d09b98aba2f660a7991c90013e41e"}
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.898351 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.898378 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e"
Dec 12 14:14:21 crc kubenswrapper[5108]: I1212 14:14:21.898756 5108 status_manager.go:895] "Failed to get status for pod" podUID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.898861 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.915963 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:14:21Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:14:21Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:14:21Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:14:21Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.916380 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.916630 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.916882 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.917234 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused"
Dec 12 14:14:21 crc kubenswrapper[5108]: E1212 14:14:21.917251 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 12 14:14:22 crc kubenswrapper[5108]: I1212 14:14:22.915962 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"470009550b71c91114bef152d4b7a1795930528ef471427debeb93b475a8034c"}
Dec 12 14:14:22 crc kubenswrapper[5108]: I1212 14:14:22.917297 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4a7002304d04c2af609397e89072b0fb635cc1f9860a77bdc4631a94030418d2"}
Dec 12 14:14:22 crc kubenswrapper[5108]: I1212 14:14:22.917381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"fb1593954c0d85ce222c13c402a1120d6777e75c674b0a01323fc466458345c6"}
Dec 12 14:14:22 crc kubenswrapper[5108]: I1212 14:14:22.917458 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e07bdc23e7f0b1216cf547a4960ead30030aa3ff07d41437efdba3e681f773df"}
Dec 12
14:14:23 crc kubenswrapper[5108]: I1212 14:14:23.922891 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f05b60b6e4ef2fcbc536be4fcbf60ebd2db9a7d951c6e408685d6c786de7029f"} Dec 12 14:14:23 crc kubenswrapper[5108]: I1212 14:14:23.923264 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e" Dec 12 14:14:23 crc kubenswrapper[5108]: I1212 14:14:23.923279 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e" Dec 12 14:14:23 crc kubenswrapper[5108]: I1212 14:14:23.923501 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:25 crc kubenswrapper[5108]: I1212 14:14:25.937001 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:14:25 crc kubenswrapper[5108]: I1212 14:14:25.937299 5108 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="3071f4069d4990c1608adf401cdef4fd74f2c0fa8bfa273c3449a82eb8145bd1" exitCode=1 Dec 12 14:14:25 crc kubenswrapper[5108]: I1212 14:14:25.937380 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"3071f4069d4990c1608adf401cdef4fd74f2c0fa8bfa273c3449a82eb8145bd1"} Dec 12 14:14:25 crc kubenswrapper[5108]: I1212 14:14:25.937915 5108 scope.go:117] "RemoveContainer" containerID="3071f4069d4990c1608adf401cdef4fd74f2c0fa8bfa273c3449a82eb8145bd1" Dec 12 14:14:26 crc kubenswrapper[5108]: I1212 14:14:26.185247 5108 
kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:26 crc kubenswrapper[5108]: I1212 14:14:26.426806 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:26 crc kubenswrapper[5108]: I1212 14:14:26.427334 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:26 crc kubenswrapper[5108]: I1212 14:14:26.438130 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:26 crc kubenswrapper[5108]: I1212 14:14:26.951038 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:14:26 crc kubenswrapper[5108]: I1212 14:14:26.951364 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"243d8d984f6a47df4456767e95a62dae1c82289d5a69755c73b614103e41fc16"} Dec 12 14:14:28 crc kubenswrapper[5108]: I1212 14:14:28.936163 5108 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:28 crc kubenswrapper[5108]: I1212 14:14:28.936190 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:28 crc kubenswrapper[5108]: I1212 14:14:28.983435 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e" Dec 12 14:14:28 crc kubenswrapper[5108]: I1212 14:14:28.983567 5108 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e" Dec 12 14:14:28 crc kubenswrapper[5108]: I1212 14:14:28.988295 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:29 crc kubenswrapper[5108]: I1212 14:14:29.013242 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="7382aee1-62f9-463b-b971-4a853d601e19" Dec 12 14:14:29 crc kubenswrapper[5108]: I1212 14:14:29.980243 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e" Dec 12 14:14:29 crc kubenswrapper[5108]: I1212 14:14:29.980276 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c83071b0-146c-4768-adbb-21a30f71994e" Dec 12 14:14:29 crc kubenswrapper[5108]: I1212 14:14:29.984148 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="7382aee1-62f9-463b-b971-4a853d601e19" Dec 12 14:14:33 crc kubenswrapper[5108]: I1212 14:14:33.207049 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:33 crc kubenswrapper[5108]: I1212 14:14:33.207216 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 14:14:33 crc kubenswrapper[5108]: I1212 14:14:33.207553 5108 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 14:14:34 crc kubenswrapper[5108]: I1212 14:14:34.482857 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:35 crc kubenswrapper[5108]: I1212 14:14:35.278137 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5108]: I1212 14:14:35.348306 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 14:14:35 crc kubenswrapper[5108]: I1212 14:14:35.498955 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 14:14:35 crc kubenswrapper[5108]: I1212 14:14:35.547338 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 14:14:35 crc kubenswrapper[5108]: I1212 14:14:35.547339 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5108]: I1212 14:14:35.593656 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 14:14:35 crc kubenswrapper[5108]: I1212 14:14:35.903754 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.135869 5108 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.167267 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.241520 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.427016 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.465008 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.526251 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.812681 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.939963 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 14:14:36 crc kubenswrapper[5108]: I1212 14:14:36.957371 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.014328 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 12 14:14:37 crc 
kubenswrapper[5108]: I1212 14:14:37.136316 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.152892 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.252620 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.302920 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.344598 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.483530 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.534285 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 14:14:37 crc kubenswrapper[5108]: I1212 14:14:37.729008 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 14:14:38 crc kubenswrapper[5108]: I1212 14:14:38.088993 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 14:14:38 crc kubenswrapper[5108]: I1212 14:14:38.166821 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 14:14:38 crc kubenswrapper[5108]: 
I1212 14:14:38.463184 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 14:14:38 crc kubenswrapper[5108]: I1212 14:14:38.682306 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 14:14:38 crc kubenswrapper[5108]: I1212 14:14:38.782596 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 14:14:38 crc kubenswrapper[5108]: I1212 14:14:38.884741 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 14:14:38 crc kubenswrapper[5108]: I1212 14:14:38.898348 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.102521 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.152059 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.174825 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.177106 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.263426 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" 
reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.327036 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.382171 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.446227 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.580843 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.760493 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.852305 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.880309 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.943889 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 12 14:14:39 crc kubenswrapper[5108]: I1212 14:14:39.995831 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 14:14:40 crc kubenswrapper[5108]: I1212 14:14:40.286203 5108 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:40 crc kubenswrapper[5108]: I1212 14:14:40.542784 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 14:14:40 crc kubenswrapper[5108]: I1212 14:14:40.611491 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 14:14:40 crc kubenswrapper[5108]: I1212 14:14:40.631376 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:40 crc kubenswrapper[5108]: I1212 14:14:40.721836 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:40 crc kubenswrapper[5108]: I1212 14:14:40.726263 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.118268 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.131058 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.215809 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.234052 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 
14:14:41.247712 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.279766 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.288128 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.302162 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.344122 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.440198 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.490357 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.493775 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.513477 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.699845 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 14:14:41 crc 
kubenswrapper[5108]: I1212 14:14:41.821261 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.932333 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.943430 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 14:14:41 crc kubenswrapper[5108]: I1212 14:14:41.959517 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.054564 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.278637 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.294121 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.346351 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.347847 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.442283 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.566267 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.614525 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.643681 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 14:14:42 crc kubenswrapper[5108]: I1212 14:14:42.833514 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.097512 5108 ???:1] "http: TLS handshake error from 192.168.126.11:32924: no serving certificate available for the kubelet" Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.192265 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.207197 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.207298 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" 
Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.308687 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.336339 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.504542 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.802775 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.928751 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 12 14:14:43 crc kubenswrapper[5108]: I1212 14:14:43.980354 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:14:44 crc kubenswrapper[5108]: I1212 14:14:44.469493 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:14:44 crc kubenswrapper[5108]: I1212 14:14:44.794575 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 12 14:14:44 crc kubenswrapper[5108]: I1212 14:14:44.963492 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.123583 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.129983 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.481333 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.637375 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.697354 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.713350 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.861348 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.938702 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.949628 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 12 14:14:45 crc kubenswrapper[5108]: I1212 14:14:45.965384 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.013654 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.129250 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.190475 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.387854 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.398356 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.500938 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.538525 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.637963 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.671605 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.674035 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.744225 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.750530 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.775406 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 12 14:14:46 crc kubenswrapper[5108]: I1212 14:14:46.915183 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.020633 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.021859 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.081061 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.156672 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.265828 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.394824 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.445984 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.481349 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.483724 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.503356 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.667245 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.671141 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.686663 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.830432 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.834580 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.834637 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.844810 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.858940 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.858920138 podStartE2EDuration="19.858920138s" podCreationTimestamp="2025-12-12 14:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:14:47.857384184 +0000 UTC m=+240.765375353" watchObservedRunningTime="2025-12-12 14:14:47.858920138 +0000 UTC m=+240.766911327"
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.877427 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Dec 12 14:14:47 crc kubenswrapper[5108]: I1212 14:14:47.913374 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.129991 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.254847 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.329324 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.334345 5108 reflector.go:430] "Caches populated" type="*v1.Secret"
reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.336572 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.416255 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.524123 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.538104 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.542132 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.672809 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.809970 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.834662 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.934191 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 14:14:48 crc kubenswrapper[5108]: I1212 14:14:48.992540 5108 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.185848 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.202324 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.401796 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.488495 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.808266 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.931148 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.986915 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:14:49 crc kubenswrapper[5108]: I1212 14:14:49.986990 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" 
podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.082836 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.121945 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.141030 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.187245 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.192887 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.254004 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.295687 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.434406 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.459862 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.530706 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.566208 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.787645 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.792766 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.804505 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.859338 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.904330 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.915053 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 14:14:50 crc kubenswrapper[5108]: I1212 14:14:50.920819 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.038396 5108 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.067155 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.073275 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.082816 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.132182 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.173911 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.272646 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.290744 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.296731 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.302342 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.302450 
5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.306340 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.335606 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.395370 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.400050 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.446984 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.455797 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.456183 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a" gracePeriod=5 Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.519526 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.578704 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 14:14:51 crc kubenswrapper[5108]: I1212 14:14:51.968549 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.014428 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.055568 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.076010 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.115193 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.160848 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.234608 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.279142 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.289484 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.366559 5108 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.562228 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.758710 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.781656 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 14:14:52 crc kubenswrapper[5108]: I1212 14:14:52.812843 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.010251 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.045844 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.178400 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.182987 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.207730 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.207815 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.207871 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.208650 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"243d8d984f6a47df4456767e95a62dae1c82289d5a69755c73b614103e41fc16"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.208750 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://243d8d984f6a47df4456767e95a62dae1c82289d5a69755c73b614103e41fc16" gracePeriod=30 Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.216594 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.217633 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 14:14:53 crc kubenswrapper[5108]: 
I1212 14:14:53.219892 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.305200 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.340455 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.353925 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.524751 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.618255 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.658860 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.875029 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.922921 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 14:14:53 crc kubenswrapper[5108]: I1212 14:14:53.974860 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.070442 5108 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.106270 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.172437 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.240938 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.313334 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.331853 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.394976 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.596630 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.792334 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.806601 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 
14:14:54.896882 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:14:54 crc kubenswrapper[5108]: I1212 14:14:54.996589 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.105130 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.160179 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.197451 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.255650 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.258692 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.337658 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.338067 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.350759 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.373544 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.443004 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.508953 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.573789 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.584222 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.585752 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.694351 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.711235 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.725692 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 12 14:14:55 crc 
kubenswrapper[5108]: I1212 14:14:55.791305 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.911761 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Dec 12 14:14:55 crc kubenswrapper[5108]: I1212 14:14:55.973812 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 12 14:14:56 crc kubenswrapper[5108]: I1212 14:14:56.440540 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 12 14:14:56 crc kubenswrapper[5108]: I1212 14:14:56.516215 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.026614 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.026998 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.029125 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.156771 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.156856 5108 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a" exitCode=137
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.157008 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.157055 5108 scope.go:117] "RemoveContainer" containerID="db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.161572 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.161710 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.161714 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.161795 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.161827 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.161955 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.161959 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.162014 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.162012 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.162606 5108 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.162623 5108 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.162642 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.162654 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.174422 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.186333 5108 scope.go:117] "RemoveContainer" containerID="db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a"
Dec 12 14:14:57 crc kubenswrapper[5108]: E1212 14:14:57.187075 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a\": container with ID starting with db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a not found: ID does not exist" containerID="db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.187148 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a"} err="failed to get container status \"db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a\": rpc error: code = NotFound desc = could not find container \"db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a\": container with ID starting with db171e7a51684b27d84082f1717be668c870b74c1aaa23e610bd48bc230ff78a not found: ID does not exist"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.221527 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.263717 5108 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.414010 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.414514 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.466202 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.626860 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 12 14:14:57 crc kubenswrapper[5108]: I1212 14:14:57.741975 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 12 14:14:58 crc kubenswrapper[5108]: I1212 14:14:58.703654 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 12 14:15:19 crc kubenswrapper[5108]: I1212 14:15:19.986347 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:15:19 crc kubenswrapper[5108]: I1212 14:15:19.987959 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:15:24 crc kubenswrapper[5108]: I1212 14:15:24.305583 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:15:24 crc kubenswrapper[5108]: I1212 14:15:24.307597 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 14:15:24 crc kubenswrapper[5108]: I1212 14:15:24.307644 5108 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="243d8d984f6a47df4456767e95a62dae1c82289d5a69755c73b614103e41fc16" exitCode=137
Dec 12 14:15:24 crc kubenswrapper[5108]: I1212 14:15:24.307745 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"243d8d984f6a47df4456767e95a62dae1c82289d5a69755c73b614103e41fc16"}
Dec 12 14:15:24 crc kubenswrapper[5108]: I1212 14:15:24.307794 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d2fbc02969b6d1a834b6c543522c48a1d7d9a84c9df205189128252cfa89e95b"}
Dec 12 14:15:24 crc kubenswrapper[5108]: I1212 14:15:24.307814 5108 scope.go:117] "RemoveContainer" containerID="3071f4069d4990c1608adf401cdef4fd74f2c0fa8bfa273c3449a82eb8145bd1"
Dec 12 14:15:24 crc kubenswrapper[5108]: I1212 14:15:24.483275 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:15:25 crc kubenswrapper[5108]: I1212 14:15:25.313709 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:15:28 crc kubenswrapper[5108]: I1212 14:15:28.321767 5108 ???:1] "http: TLS handshake error from 192.168.126.11:54748: no serving certificate available for the kubelet"
Dec 12 14:15:33 crc kubenswrapper[5108]: I1212 14:15:33.207345 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:15:33 crc kubenswrapper[5108]: I1212 14:15:33.210716 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:15:33 crc kubenswrapper[5108]: I1212 14:15:33.363215 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:15:34 crc kubenswrapper[5108]: I1212 14:15:34.119834 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.644470 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fg9h4"]
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.645302 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fg9h4" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="registry-server" containerID="cri-o://e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24" gracePeriod=30
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.660261 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99vmq"]
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.660655 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-99vmq" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="registry-server" containerID="cri-o://163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a" gracePeriod=30
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.663820 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-l5t96"]
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.664069 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" podUID="378c9b4b-6598-489c-9af4-b776c79341f6" containerName="marketplace-operator" containerID="cri-o://da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c" gracePeriod=30
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.678159 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddbtk"]
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.693184 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ddbtk" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="registry-server" containerID="cri-o://b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747" gracePeriod=30
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.695745 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bfb2d"]
Dec 12 14:15:39 crc kubenswrapper[5108]: I1212 14:15:39.696216 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bfb2d" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="registry-server" containerID="cri-o://e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" gracePeriod=30
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.735303 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f is running failed: container process not found" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.735850 5108 log.go:32] "ExecSync cmd from runtime service failed" err=<
Dec 12 14:15:39 crc kubenswrapper[5108]:  rpc error: code = Unknown desc = command error: read pipe failed
Dec 12 14:15:39 crc kubenswrapper[5108]:  , stdout: , stderr: , exit code -1
Dec 12 14:15:39 crc kubenswrapper[5108]:  > containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.735979 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f is running failed: container process not found" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.736482 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f is running failed: container process not found" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.736527 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bfb2d" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="registry-server" probeResult="unknown"
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.736618 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f is running failed: container process not found" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.736970 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f is running failed: container process not found" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 14:15:39 crc kubenswrapper[5108]: E1212 14:15:39.736999 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f is running failed: container process not found" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bfb2d" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="registry-server" probeResult="unknown"
Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.047162 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d7df9af_f2f2_4ff9_a6be_f3b20aa4f91b.slice/crio-conmon-b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb05ea99a_815e_48ce_b4bb_1efda1405964.slice/crio-conmon-e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f.scope\": RecentStats: unable to find data in memory cache]"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.156215 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fg9h4"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.168456 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddbtk"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.236400 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77fwk\" (UniqueName: \"kubernetes.io/projected/35559d5b-861e-4f71-b4fe-9cafa147f46b-kube-api-access-77fwk\") pod \"35559d5b-861e-4f71-b4fe-9cafa147f46b\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.236488 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmgvh\" (UniqueName: \"kubernetes.io/projected/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-kube-api-access-bmgvh\") pod \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.236545 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-catalog-content\") pod \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.236625 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-catalog-content\") pod \"35559d5b-861e-4f71-b4fe-9cafa147f46b\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.236659 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-utilities\") pod \"35559d5b-861e-4f71-b4fe-9cafa147f46b\" (UID: \"35559d5b-861e-4f71-b4fe-9cafa147f46b\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.236674 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-utilities\") pod \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\" (UID: \"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.240830 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-utilities" (OuterVolumeSpecName: "utilities") pod "35559d5b-861e-4f71-b4fe-9cafa147f46b" (UID: "35559d5b-861e-4f71-b4fe-9cafa147f46b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.243595 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35559d5b-861e-4f71-b4fe-9cafa147f46b-kube-api-access-77fwk" (OuterVolumeSpecName: "kube-api-access-77fwk") pod "35559d5b-861e-4f71-b4fe-9cafa147f46b" (UID: "35559d5b-861e-4f71-b4fe-9cafa147f46b"). InnerVolumeSpecName "kube-api-access-77fwk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.253437 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-kube-api-access-bmgvh" (OuterVolumeSpecName: "kube-api-access-bmgvh") pod "9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" (UID: "9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b"). InnerVolumeSpecName "kube-api-access-bmgvh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.253669 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" (UID: "9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.277129 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-utilities" (OuterVolumeSpecName: "utilities") pod "9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" (UID: "9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.281727 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.290443 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35559d5b-861e-4f71-b4fe-9cafa147f46b" (UID: "35559d5b-861e-4f71-b4fe-9cafa147f46b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.292063 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bfb2d"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.293828 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99vmq"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337642 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkq4k\" (UniqueName: \"kubernetes.io/projected/b05ea99a-815e-48ce-b4bb-1efda1405964-kube-api-access-xkq4k\") pod \"b05ea99a-815e-48ce-b4bb-1efda1405964\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337706 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-catalog-content\") pod \"f963a2e4-7bac-4938-ba67-f65a48ac4806\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337739 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-operator-metrics\") pod \"378c9b4b-6598-489c-9af4-b776c79341f6\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337795 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-utilities\") pod \"b05ea99a-815e-48ce-b4bb-1efda1405964\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337828 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-utilities\") pod \"f963a2e4-7bac-4938-ba67-f65a48ac4806\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337854 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/378c9b4b-6598-489c-9af4-b776c79341f6-tmp\") pod \"378c9b4b-6598-489c-9af4-b776c79341f6\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-trusted-ca\") pod \"378c9b4b-6598-489c-9af4-b776c79341f6\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337901 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz7v5\" (UniqueName: \"kubernetes.io/projected/f963a2e4-7bac-4938-ba67-f65a48ac4806-kube-api-access-gz7v5\") pod \"f963a2e4-7bac-4938-ba67-f65a48ac4806\" (UID: \"f963a2e4-7bac-4938-ba67-f65a48ac4806\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337975 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-catalog-content\") pod \"b05ea99a-815e-48ce-b4bb-1efda1405964\" (UID: \"b05ea99a-815e-48ce-b4bb-1efda1405964\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.337991 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nl9v\" (UniqueName: \"kubernetes.io/projected/378c9b4b-6598-489c-9af4-b776c79341f6-kube-api-access-7nl9v\") pod \"378c9b4b-6598-489c-9af4-b776c79341f6\" (UID: \"378c9b4b-6598-489c-9af4-b776c79341f6\") "
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.338214 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.338233 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35559d5b-861e-4f71-b4fe-9cafa147f46b-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.338242 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.338251 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77fwk\" (UniqueName: \"kubernetes.io/projected/35559d5b-861e-4f71-b4fe-9cafa147f46b-kube-api-access-77fwk\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.338261 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bmgvh\" (UniqueName: \"kubernetes.io/projected/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-kube-api-access-bmgvh\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.338270 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.339563 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/378c9b4b-6598-489c-9af4-b776c79341f6-tmp" (OuterVolumeSpecName: "tmp") pod "378c9b4b-6598-489c-9af4-b776c79341f6" (UID: "378c9b4b-6598-489c-9af4-b776c79341f6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.339644 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "378c9b4b-6598-489c-9af4-b776c79341f6" (UID: "378c9b4b-6598-489c-9af4-b776c79341f6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.340121 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-utilities" (OuterVolumeSpecName: "utilities") pod "f963a2e4-7bac-4938-ba67-f65a48ac4806" (UID: "f963a2e4-7bac-4938-ba67-f65a48ac4806"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.340433 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-utilities" (OuterVolumeSpecName: "utilities") pod "b05ea99a-815e-48ce-b4bb-1efda1405964" (UID: "b05ea99a-815e-48ce-b4bb-1efda1405964"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.344539 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05ea99a-815e-48ce-b4bb-1efda1405964-kube-api-access-xkq4k" (OuterVolumeSpecName: "kube-api-access-xkq4k") pod "b05ea99a-815e-48ce-b4bb-1efda1405964" (UID: "b05ea99a-815e-48ce-b4bb-1efda1405964"). InnerVolumeSpecName "kube-api-access-xkq4k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.345815 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "378c9b4b-6598-489c-9af4-b776c79341f6" (UID: "378c9b4b-6598-489c-9af4-b776c79341f6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.347108 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378c9b4b-6598-489c-9af4-b776c79341f6-kube-api-access-7nl9v" (OuterVolumeSpecName: "kube-api-access-7nl9v") pod "378c9b4b-6598-489c-9af4-b776c79341f6" (UID: "378c9b4b-6598-489c-9af4-b776c79341f6"). InnerVolumeSpecName "kube-api-access-7nl9v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.348534 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f963a2e4-7bac-4938-ba67-f65a48ac4806-kube-api-access-gz7v5" (OuterVolumeSpecName: "kube-api-access-gz7v5") pod "f963a2e4-7bac-4938-ba67-f65a48ac4806" (UID: "f963a2e4-7bac-4938-ba67-f65a48ac4806"). InnerVolumeSpecName "kube-api-access-gz7v5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.396367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f963a2e4-7bac-4938-ba67-f65a48ac4806" (UID: "f963a2e4-7bac-4938-ba67-f65a48ac4806"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.399891 5108 generic.go:358] "Generic (PLEG): container finished" podID="378c9b4b-6598-489c-9af4-b776c79341f6" containerID="da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c" exitCode=0
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.399968 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" event={"ID":"378c9b4b-6598-489c-9af4-b776c79341f6","Type":"ContainerDied","Data":"da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c"}
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.399992 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" event={"ID":"378c9b4b-6598-489c-9af4-b776c79341f6","Type":"ContainerDied","Data":"c646abea4c0fb59fba13aac90afe0b60fc08dda1694f6701aa7ff0f882bb9abe"}
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.400007 5108 scope.go:117] "RemoveContainer" containerID="da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.400137 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-l5t96" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.409934 5108 generic.go:358] "Generic (PLEG): container finished" podID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerID="163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a" exitCode=0 Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.410002 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vmq" event={"ID":"f963a2e4-7bac-4938-ba67-f65a48ac4806","Type":"ContainerDied","Data":"163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a"} Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.410028 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vmq" event={"ID":"f963a2e4-7bac-4938-ba67-f65a48ac4806","Type":"ContainerDied","Data":"01d56284ae79f428e9481b467b2bc25d65b4e72dcc7e5e8f266c5823c6f973b7"} Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.410106 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99vmq" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.419310 5108 generic.go:358] "Generic (PLEG): container finished" podID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" exitCode=0 Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.419427 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfb2d" event={"ID":"b05ea99a-815e-48ce-b4bb-1efda1405964","Type":"ContainerDied","Data":"e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f"} Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.419436 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bfb2d" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.419458 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfb2d" event={"ID":"b05ea99a-815e-48ce-b4bb-1efda1405964","Type":"ContainerDied","Data":"7850374c1e90f0cc1f39c46484c75677e78c8279d1653fda4f94dd2cf5983edd"} Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.422270 5108 generic.go:358] "Generic (PLEG): container finished" podID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerID="e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24" exitCode=0 Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.422672 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fg9h4" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.424016 5108 scope.go:117] "RemoveContainer" containerID="da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.424164 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9h4" event={"ID":"35559d5b-861e-4f71-b4fe-9cafa147f46b","Type":"ContainerDied","Data":"e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24"} Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.424232 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9h4" event={"ID":"35559d5b-861e-4f71-b4fe-9cafa147f46b","Type":"ContainerDied","Data":"3fc4549efc55c994b44aa22e5d065827a82f871624e8c12c772b1e1b7fd860fa"} Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.424802 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c\": container with ID starting with 
da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c not found: ID does not exist" containerID="da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.424834 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c"} err="failed to get container status \"da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c\": rpc error: code = NotFound desc = could not find container \"da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c\": container with ID starting with da0cdb6c8a79d498fe6f1fc177d7e076f7294e841470d4e790506397d0fbda6c not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.424854 5108 scope.go:117] "RemoveContainer" containerID="163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.432325 5108 generic.go:358] "Generic (PLEG): container finished" podID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerID="b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747" exitCode=0 Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.432360 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddbtk" event={"ID":"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b","Type":"ContainerDied","Data":"b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747"} Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.432398 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddbtk" event={"ID":"9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b","Type":"ContainerDied","Data":"0e31086cafa9b75dc3c21944198504050830f8e4516fd5e9877ff4bf500801a3"} Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.432428 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddbtk" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.438186 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-l5t96"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440842 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440886 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/378c9b4b-6598-489c-9af4-b776c79341f6-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440897 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440907 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gz7v5\" (UniqueName: \"kubernetes.io/projected/f963a2e4-7bac-4938-ba67-f65a48ac4806-kube-api-access-gz7v5\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440916 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7nl9v\" (UniqueName: \"kubernetes.io/projected/378c9b4b-6598-489c-9af4-b776c79341f6-kube-api-access-7nl9v\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440924 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xkq4k\" (UniqueName: \"kubernetes.io/projected/b05ea99a-815e-48ce-b4bb-1efda1405964-kube-api-access-xkq4k\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440932 5108 reconciler_common.go:299] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f963a2e4-7bac-4938-ba67-f65a48ac4806-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440941 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/378c9b4b-6598-489c-9af4-b776c79341f6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.440950 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.448880 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-l5t96"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.468044 5108 scope.go:117] "RemoveContainer" containerID="fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.471110 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99vmq"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.478645 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b05ea99a-815e-48ce-b4bb-1efda1405964" (UID: "b05ea99a-815e-48ce-b4bb-1efda1405964"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.478919 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-99vmq"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.482126 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fg9h4"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.484793 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fg9h4"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.487704 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddbtk"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.490526 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddbtk"] Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.504415 5108 scope.go:117] "RemoveContainer" containerID="ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.517923 5108 scope.go:117] "RemoveContainer" containerID="163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.518464 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a\": container with ID starting with 163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a not found: ID does not exist" containerID="163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.518522 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a"} err="failed to get 
container status \"163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a\": rpc error: code = NotFound desc = could not find container \"163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a\": container with ID starting with 163f3d453023834c4a791e6d5adf23994745596c3991639c936be6dd47b3323a not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.518563 5108 scope.go:117] "RemoveContainer" containerID="fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.518914 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c\": container with ID starting with fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c not found: ID does not exist" containerID="fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.519054 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c"} err="failed to get container status \"fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c\": rpc error: code = NotFound desc = could not find container \"fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c\": container with ID starting with fc083a67eac7493fd88b19b61ab3f61efd79f441d4f5d2b65e8bc224fbb5d38c not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.519192 5108 scope.go:117] "RemoveContainer" containerID="ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.519514 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d\": container with ID starting with ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d not found: ID does not exist" containerID="ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.519540 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d"} err="failed to get container status \"ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d\": rpc error: code = NotFound desc = could not find container \"ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d\": container with ID starting with ba6fc236d3cc2a58b4364567041ba1cbb3ac5ab922093e51c36518e5e2a86e7d not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.519556 5108 scope.go:117] "RemoveContainer" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.531201 5108 scope.go:117] "RemoveContainer" containerID="d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.542354 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ea99a-815e-48ce-b4bb-1efda1405964-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.561810 5108 scope.go:117] "RemoveContainer" containerID="bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.577856 5108 scope.go:117] "RemoveContainer" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.578631 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f\": container with ID starting with e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f not found: ID does not exist" containerID="e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.578674 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f"} err="failed to get container status \"e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f\": rpc error: code = NotFound desc = could not find container \"e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f\": container with ID starting with e42522eda8b1a23c955d0531f95ea2f76d098d4393146dd84878a438d3bbc73f not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.578705 5108 scope.go:117] "RemoveContainer" containerID="d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.579009 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9\": container with ID starting with d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9 not found: ID does not exist" containerID="d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.579126 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9"} err="failed to get container status \"d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9\": rpc error: code = NotFound desc = could not find container 
\"d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9\": container with ID starting with d0cb8d3decd5ecbf498a8b6574a4a5d0b4339e871b354faf50eb562249ce04e9 not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.579210 5108 scope.go:117] "RemoveContainer" containerID="bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.579621 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168\": container with ID starting with bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168 not found: ID does not exist" containerID="bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.579661 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168"} err="failed to get container status \"bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168\": rpc error: code = NotFound desc = could not find container \"bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168\": container with ID starting with bbd3d08b160b28d5cae7a156960429912919026f4ad7698866d633631c7fe168 not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.579677 5108 scope.go:117] "RemoveContainer" containerID="e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.596144 5108 scope.go:117] "RemoveContainer" containerID="9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.612643 5108 scope.go:117] "RemoveContainer" containerID="1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37" Dec 12 14:15:40 crc 
kubenswrapper[5108]: I1212 14:15:40.629639 5108 scope.go:117] "RemoveContainer" containerID="e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.630176 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24\": container with ID starting with e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24 not found: ID does not exist" containerID="e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.630210 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24"} err="failed to get container status \"e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24\": rpc error: code = NotFound desc = could not find container \"e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24\": container with ID starting with e9caf7c47d887ac7bfb7a1315292f0be164ca2255932db9a3edae12b4b6e8a24 not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.630237 5108 scope.go:117] "RemoveContainer" containerID="9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.630719 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565\": container with ID starting with 9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565 not found: ID does not exist" containerID="9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.630797 5108 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565"} err="failed to get container status \"9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565\": rpc error: code = NotFound desc = could not find container \"9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565\": container with ID starting with 9a52139b6b4a89f3b720753b38b4d8a0f1998e531cab1c0665136bbb3d120565 not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.630885 5108 scope.go:117] "RemoveContainer" containerID="1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.631309 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37\": container with ID starting with 1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37 not found: ID does not exist" containerID="1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.631439 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37"} err="failed to get container status \"1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37\": rpc error: code = NotFound desc = could not find container \"1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37\": container with ID starting with 1b9bd7ff7ac5a758b677f327df4177219d2578d07dbf2958fd567532221ebf37 not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.631543 5108 scope.go:117] "RemoveContainer" containerID="b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.648921 5108 scope.go:117] "RemoveContainer" 
containerID="2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.673183 5108 scope.go:117] "RemoveContainer" containerID="53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.689894 5108 scope.go:117] "RemoveContainer" containerID="b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.690372 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747\": container with ID starting with b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747 not found: ID does not exist" containerID="b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.690411 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747"} err="failed to get container status \"b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747\": rpc error: code = NotFound desc = could not find container \"b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747\": container with ID starting with b3361c9227c5e27461010ad1c5da9291ec9958a2e4ead2eac2a43aab3afbe747 not found: ID does not exist" Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.690437 5108 scope.go:117] "RemoveContainer" containerID="2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25" Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.690825 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25\": container with ID starting with 
2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25 not found: ID does not exist" containerID="2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.690846 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25"} err="failed to get container status \"2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25\": rpc error: code = NotFound desc = could not find container \"2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25\": container with ID starting with 2f7dfd14205244f55a9b5f38a797ce202802b1ea0af66d0c30e0142afd2dfd25 not found: ID does not exist"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.690858 5108 scope.go:117] "RemoveContainer" containerID="53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4"
Dec 12 14:15:40 crc kubenswrapper[5108]: E1212 14:15:40.691489 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4\": container with ID starting with 53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4 not found: ID does not exist" containerID="53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.691516 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4"} err="failed to get container status \"53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4\": rpc error: code = NotFound desc = could not find container \"53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4\": container with ID starting with 53a8c3d2a662cb73bfac4fe573850e730e9e334d891560c58bf36dad32fae0f4 not found: ID does not exist"
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.765527 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bfb2d"]
Dec 12 14:15:40 crc kubenswrapper[5108]: I1212 14:15:40.769872 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bfb2d"]
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.082478 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k2hqp"]
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083131 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083151 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083167 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083175 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083187 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083195 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083206 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083213 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083226 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083233 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083243 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083250 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083256 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083263 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083280 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083287 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083296 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083302 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083311 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083318 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="extract-utilities"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083327 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="378c9b4b-6598-489c-9af4-b776c79341f6" containerName="marketplace-operator"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083334 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="378c9b4b-6598-489c-9af4-b776c79341f6" containerName="marketplace-operator"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083348 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083354 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083362 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083371 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="extract-content"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083388 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" containerName="installer"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083396 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" containerName="installer"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083408 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083415 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083508 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083523 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083536 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083545 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083553 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d58d8628-3aff-4f44-a12d-0e2df4f3ad87" containerName="installer"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083562 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" containerName="registry-server"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.083572 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="378c9b4b-6598-489c-9af4-b776c79341f6" containerName="marketplace-operator"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.091771 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.093887 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2hqp"]
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.094113 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.094195 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.095575 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.149128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k72rj\" (UniqueName: \"kubernetes.io/projected/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-kube-api-access-k72rj\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.149196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-utilities\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.149386 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-catalog-content\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.250973 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-utilities\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.251038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-catalog-content\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.251106 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k72rj\" (UniqueName: \"kubernetes.io/projected/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-kube-api-access-k72rj\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.251584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-catalog-content\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.251645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-utilities\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.271619 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k72rj\" (UniqueName: \"kubernetes.io/projected/d71f524c-38b4-4eca-b9d2-a0dc97e4ef02-kube-api-access-k72rj\") pod \"community-operators-k2hqp\" (UID: \"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02\") " pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.283167 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p7mr7"]
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.292566 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.296043 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.296525 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7mr7"]
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.352226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b817ef38-a171-4c25-95d7-f3e73a2c56c7-utilities\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.352433 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b817ef38-a171-4c25-95d7-f3e73a2c56c7-catalog-content\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.352616 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnn6\" (UniqueName: \"kubernetes.io/projected/b817ef38-a171-4c25-95d7-f3e73a2c56c7-kube-api-access-dlnn6\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.409776 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.417318 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35559d5b-861e-4f71-b4fe-9cafa147f46b" path="/var/lib/kubelet/pods/35559d5b-861e-4f71-b4fe-9cafa147f46b/volumes"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.418453 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378c9b4b-6598-489c-9af4-b776c79341f6" path="/var/lib/kubelet/pods/378c9b4b-6598-489c-9af4-b776c79341f6/volumes"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.419128 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b" path="/var/lib/kubelet/pods/9d7df9af-f2f2-4ff9-a6be-f3b20aa4f91b/volumes"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.420700 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05ea99a-815e-48ce-b4bb-1efda1405964" path="/var/lib/kubelet/pods/b05ea99a-815e-48ce-b4bb-1efda1405964/volumes"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.421492 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f963a2e4-7bac-4938-ba67-f65a48ac4806" path="/var/lib/kubelet/pods/f963a2e4-7bac-4938-ba67-f65a48ac4806/volumes"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.454768 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b817ef38-a171-4c25-95d7-f3e73a2c56c7-catalog-content\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.454836 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dlnn6\" (UniqueName: \"kubernetes.io/projected/b817ef38-a171-4c25-95d7-f3e73a2c56c7-kube-api-access-dlnn6\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.454892 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b817ef38-a171-4c25-95d7-f3e73a2c56c7-utilities\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.455359 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b817ef38-a171-4c25-95d7-f3e73a2c56c7-catalog-content\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.456754 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b817ef38-a171-4c25-95d7-f3e73a2c56c7-utilities\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.476982 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlnn6\" (UniqueName: \"kubernetes.io/projected/b817ef38-a171-4c25-95d7-f3e73a2c56c7-kube-api-access-dlnn6\") pod \"certified-operators-p7mr7\" (UID: \"b817ef38-a171-4c25-95d7-f3e73a2c56c7\") " pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.622902 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.653608 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2hqp"]
Dec 12 14:15:41 crc kubenswrapper[5108]: I1212 14:15:41.829426 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7mr7"]
Dec 12 14:15:41 crc kubenswrapper[5108]: W1212 14:15:41.838858 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb817ef38_a171_4c25_95d7_f3e73a2c56c7.slice/crio-f89cacec3329885d5b378b135dcefe4c5f149f3c3911a1ee96e5ca99ad99146c WatchSource:0}: Error finding container f89cacec3329885d5b378b135dcefe4c5f149f3c3911a1ee96e5ca99ad99146c: Status 404 returned error can't find the container with id f89cacec3329885d5b378b135dcefe4c5f149f3c3911a1ee96e5ca99ad99146c
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.453296 5108 generic.go:358] "Generic (PLEG): container finished" podID="b817ef38-a171-4c25-95d7-f3e73a2c56c7" containerID="26b36da4a48184ad81aaea787095b18152ec892e74a5592124dcddba0e9c23db" exitCode=0
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.453576 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7mr7" event={"ID":"b817ef38-a171-4c25-95d7-f3e73a2c56c7","Type":"ContainerDied","Data":"26b36da4a48184ad81aaea787095b18152ec892e74a5592124dcddba0e9c23db"}
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.453715 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7mr7" event={"ID":"b817ef38-a171-4c25-95d7-f3e73a2c56c7","Type":"ContainerStarted","Data":"f89cacec3329885d5b378b135dcefe4c5f149f3c3911a1ee96e5ca99ad99146c"}
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.454883 5108 generic.go:358] "Generic (PLEG): container finished" podID="d71f524c-38b4-4eca-b9d2-a0dc97e4ef02" containerID="46ffc5c3c857f484ab8ca25f0a36d8eb6d196bcaec8e47c0fc1d78937b6d3dcb" exitCode=0
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.454931 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2hqp" event={"ID":"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02","Type":"ContainerDied","Data":"46ffc5c3c857f484ab8ca25f0a36d8eb6d196bcaec8e47c0fc1d78937b6d3dcb"}
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.454950 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2hqp" event={"ID":"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02","Type":"ContainerStarted","Data":"f97287a7ba7dc5c44919a7e4ae645ef9e6d57c5029dd6c81d7e283bbe41a2c44"}
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.555056 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.569126 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.569392 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" podUID="b6da6d66-adc4-4cd5-968f-21877a7820f0" containerName="controller-manager" containerID="cri-o://aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2" gracePeriod=30
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.569561 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.574568 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.577001 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.577242 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" podUID="a0eab168-419a-4cb1-b318-244a89a1af5e" containerName="route-controller-manager" containerID="cri-o://f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11" gracePeriod=30
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.582912 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.583194 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.589242 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.610692 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.614946 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.621702 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.621756 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.622424 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-blzxz"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.672398 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5be79318-038a-4e92-8837-dc46ede3d903-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.673380 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpss7\" (UniqueName: \"kubernetes.io/projected/16c0779f-318f-4ca3-ae9f-6a4954d8d814-kube-api-access-bpss7\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.673483 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5be79318-038a-4e92-8837-dc46ede3d903-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.673586 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5be79318-038a-4e92-8837-dc46ede3d903-tmp\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.673653 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c0779f-318f-4ca3-ae9f-6a4954d8d814-secret-volume\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.673735 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c0779f-318f-4ca3-ae9f-6a4954d8d814-config-volume\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.673798 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltd5f\" (UniqueName: \"kubernetes.io/projected/5be79318-038a-4e92-8837-dc46ede3d903-kube-api-access-ltd5f\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.674900 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.774587 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5be79318-038a-4e92-8837-dc46ede3d903-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.774648 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5be79318-038a-4e92-8837-dc46ede3d903-tmp\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.774668 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c0779f-318f-4ca3-ae9f-6a4954d8d814-secret-volume\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.774696 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c0779f-318f-4ca3-ae9f-6a4954d8d814-config-volume\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.774712 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ltd5f\" (UniqueName: \"kubernetes.io/projected/5be79318-038a-4e92-8837-dc46ede3d903-kube-api-access-ltd5f\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.774733 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5be79318-038a-4e92-8837-dc46ede3d903-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.774756 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bpss7\" (UniqueName: \"kubernetes.io/projected/16c0779f-318f-4ca3-ae9f-6a4954d8d814-kube-api-access-bpss7\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.775851 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5be79318-038a-4e92-8837-dc46ede3d903-tmp\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.776437 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c0779f-318f-4ca3-ae9f-6a4954d8d814-config-volume\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.776607 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5be79318-038a-4e92-8837-dc46ede3d903-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.783545 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c0779f-318f-4ca3-ae9f-6a4954d8d814-secret-volume\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.784169 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5be79318-038a-4e92-8837-dc46ede3d903-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.806468 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltd5f\" (UniqueName: \"kubernetes.io/projected/5be79318-038a-4e92-8837-dc46ede3d903-kube-api-access-ltd5f\") pod \"marketplace-operator-547dbd544d-g8w8p\" (UID: \"5be79318-038a-4e92-8837-dc46ede3d903\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.825228 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpss7\" (UniqueName: \"kubernetes.io/projected/16c0779f-318f-4ca3-ae9f-6a4954d8d814-kube-api-access-bpss7\") pod \"collect-profiles-29425815-zsd42\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.883932 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkdxn"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.891015 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.925454 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.927823 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkdxn"]
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.928281 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.976729 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-utilities\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.976792 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-catalog-content\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:42 crc kubenswrapper[5108]: I1212 14:15:42.976822 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljvc6\" (UniqueName: \"kubernetes.io/projected/ba7bd770-9185-40a3-ae63-961ee83bd38e-kube-api-access-ljvc6\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.076656 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.082002 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-utilities\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.082053 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-catalog-content\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.082095 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljvc6\" (UniqueName: \"kubernetes.io/projected/ba7bd770-9185-40a3-ae63-961ee83bd38e-kube-api-access-ljvc6\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.082548 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-utilities\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.082566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-catalog-content\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.091634 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.105388 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljvc6\" (UniqueName: \"kubernetes.io/projected/ba7bd770-9185-40a3-ae63-961ee83bd38e-kube-api-access-ljvc6\") pod \"redhat-marketplace-mkdxn\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") " pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.124216 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"]
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.124734 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6da6d66-adc4-4cd5-968f-21877a7820f0" containerName="controller-manager"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.124755 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6da6d66-adc4-4cd5-968f-21877a7820f0" containerName="controller-manager"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.124860 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6da6d66-adc4-4cd5-968f-21877a7820f0" containerName="controller-manager"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.150706 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api"
pods=["openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"] Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.151618 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.182657 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-config\") pod \"b6da6d66-adc4-4cd5-968f-21877a7820f0\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.182756 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6da6d66-adc4-4cd5-968f-21877a7820f0-tmp\") pod \"b6da6d66-adc4-4cd5-968f-21877a7820f0\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.182798 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-client-ca\") pod \"b6da6d66-adc4-4cd5-968f-21877a7820f0\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.182832 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-proxy-ca-bundles\") pod \"b6da6d66-adc4-4cd5-968f-21877a7820f0\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.182865 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6da6d66-adc4-4cd5-968f-21877a7820f0-serving-cert\") pod \"b6da6d66-adc4-4cd5-968f-21877a7820f0\" (UID: 
\"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.182943 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw9gl\" (UniqueName: \"kubernetes.io/projected/b6da6d66-adc4-4cd5-968f-21877a7820f0-kube-api-access-rw9gl\") pod \"b6da6d66-adc4-4cd5-968f-21877a7820f0\" (UID: \"b6da6d66-adc4-4cd5-968f-21877a7820f0\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.184719 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6da6d66-adc4-4cd5-968f-21877a7820f0-tmp" (OuterVolumeSpecName: "tmp") pod "b6da6d66-adc4-4cd5-968f-21877a7820f0" (UID: "b6da6d66-adc4-4cd5-968f-21877a7820f0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.187595 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b6da6d66-adc4-4cd5-968f-21877a7820f0" (UID: "b6da6d66-adc4-4cd5-968f-21877a7820f0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.190827 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-config" (OuterVolumeSpecName: "config") pod "b6da6d66-adc4-4cd5-968f-21877a7820f0" (UID: "b6da6d66-adc4-4cd5-968f-21877a7820f0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.195224 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6da6d66-adc4-4cd5-968f-21877a7820f0" (UID: "b6da6d66-adc4-4cd5-968f-21877a7820f0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.201704 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.214832 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6da6d66-adc4-4cd5-968f-21877a7820f0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6da6d66-adc4-4cd5-968f-21877a7820f0" (UID: "b6da6d66-adc4-4cd5-968f-21877a7820f0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.218651 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"] Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.237284 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6da6d66-adc4-4cd5-968f-21877a7820f0-kube-api-access-rw9gl" (OuterVolumeSpecName: "kube-api-access-rw9gl") pod "b6da6d66-adc4-4cd5-968f-21877a7820f0" (UID: "b6da6d66-adc4-4cd5-968f-21877a7820f0"). InnerVolumeSpecName "kube-api-access-rw9gl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.263211 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"] Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.263787 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0eab168-419a-4cb1-b318-244a89a1af5e" containerName="route-controller-manager" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.263805 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0eab168-419a-4cb1-b318-244a89a1af5e" containerName="route-controller-manager" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.263907 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0eab168-419a-4cb1-b318-244a89a1af5e" containerName="route-controller-manager" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.282194 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkdxn" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290486 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0eab168-419a-4cb1-b318-244a89a1af5e-tmp\") pod \"a0eab168-419a-4cb1-b318-244a89a1af5e\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290557 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0eab168-419a-4cb1-b318-244a89a1af5e-serving-cert\") pod \"a0eab168-419a-4cb1-b318-244a89a1af5e\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290610 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-client-ca\") pod \"a0eab168-419a-4cb1-b318-244a89a1af5e\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290630 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-config\") pod \"a0eab168-419a-4cb1-b318-244a89a1af5e\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290652 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f865\" (UniqueName: \"kubernetes.io/projected/a0eab168-419a-4cb1-b318-244a89a1af5e-kube-api-access-7f865\") pod \"a0eab168-419a-4cb1-b318-244a89a1af5e\" (UID: \"a0eab168-419a-4cb1-b318-244a89a1af5e\") " Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290806 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/f8e139ce-63ec-4858-8765-013c898fc41a-tmp\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290835 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhs6s\" (UniqueName: \"kubernetes.io/projected/f8e139ce-63ec-4858-8765-013c898fc41a-kube-api-access-nhs6s\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290878 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e139ce-63ec-4858-8765-013c898fc41a-serving-cert\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290912 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-proxy-ca-bundles\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290942 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-config\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 
12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.290996 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-client-ca\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.291042 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.291056 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.291067 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6da6d66-adc4-4cd5-968f-21877a7820f0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.291095 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rw9gl\" (UniqueName: \"kubernetes.io/projected/b6da6d66-adc4-4cd5-968f-21877a7820f0-kube-api-access-rw9gl\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.291107 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6da6d66-adc4-4cd5-968f-21877a7820f0-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.291115 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6da6d66-adc4-4cd5-968f-21877a7820f0-tmp\") on node \"crc\" DevicePath \"\"" 
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.291675 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0eab168-419a-4cb1-b318-244a89a1af5e-tmp" (OuterVolumeSpecName: "tmp") pod "a0eab168-419a-4cb1-b318-244a89a1af5e" (UID: "a0eab168-419a-4cb1-b318-244a89a1af5e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.296709 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-config" (OuterVolumeSpecName: "config") pod "a0eab168-419a-4cb1-b318-244a89a1af5e" (UID: "a0eab168-419a-4cb1-b318-244a89a1af5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.297537 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.313197 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0eab168-419a-4cb1-b318-244a89a1af5e-kube-api-access-7f865" (OuterVolumeSpecName: "kube-api-access-7f865") pod "a0eab168-419a-4cb1-b318-244a89a1af5e" (UID: "a0eab168-419a-4cb1-b318-244a89a1af5e"). InnerVolumeSpecName "kube-api-access-7f865". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.313313 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0eab168-419a-4cb1-b318-244a89a1af5e" (UID: "a0eab168-419a-4cb1-b318-244a89a1af5e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.317309 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"] Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.320339 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0eab168-419a-4cb1-b318-244a89a1af5e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0eab168-419a-4cb1-b318-244a89a1af5e" (UID: "a0eab168-419a-4cb1-b318-244a89a1af5e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392145 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/265c6a3f-9b00-4500-933f-9fac4ebddc69-config\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392459 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e139ce-63ec-4858-8765-013c898fc41a-serving-cert\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392487 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-proxy-ca-bundles\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc 
kubenswrapper[5108]: I1212 14:15:43.392514 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-config\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392548 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/265c6a3f-9b00-4500-933f-9fac4ebddc69-client-ca\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392565 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/265c6a3f-9b00-4500-933f-9fac4ebddc69-tmp\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392584 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjhx2\" (UniqueName: \"kubernetes.io/projected/265c6a3f-9b00-4500-933f-9fac4ebddc69-kube-api-access-mjhx2\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-client-ca\") pod 
\"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392631 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/265c6a3f-9b00-4500-933f-9fac4ebddc69-serving-cert\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392658 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8e139ce-63ec-4858-8765-013c898fc41a-tmp\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhs6s\" (UniqueName: \"kubernetes.io/projected/f8e139ce-63ec-4858-8765-013c898fc41a-kube-api-access-nhs6s\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392714 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0eab168-419a-4cb1-b318-244a89a1af5e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392725 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0eab168-419a-4cb1-b318-244a89a1af5e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: 
I1212 14:15:43.392735 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392743 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0eab168-419a-4cb1-b318-244a89a1af5e-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.392751 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7f865\" (UniqueName: \"kubernetes.io/projected/a0eab168-419a-4cb1-b318-244a89a1af5e-kube-api-access-7f865\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.394022 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8e139ce-63ec-4858-8765-013c898fc41a-tmp\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.394210 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-client-ca\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.394351 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-proxy-ca-bundles\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc 
kubenswrapper[5108]: I1212 14:15:43.395013 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-config\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.404138 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e139ce-63ec-4858-8765-013c898fc41a-serving-cert\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.414130 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhs6s\" (UniqueName: \"kubernetes.io/projected/f8e139ce-63ec-4858-8765-013c898fc41a-kube-api-access-nhs6s\") pod \"controller-manager-8677dbb44d-xbgwl\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.465526 5108 generic.go:358] "Generic (PLEG): container finished" podID="b6da6d66-adc4-4cd5-968f-21877a7820f0" containerID="aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2" exitCode=0 Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.465625 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" event={"ID":"b6da6d66-adc4-4cd5-968f-21877a7820f0","Type":"ContainerDied","Data":"aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2"} Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.465691 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" 
event={"ID":"b6da6d66-adc4-4cd5-968f-21877a7820f0","Type":"ContainerDied","Data":"3ce1c24ea79c387fdd62c92220e54975a547982573e90c38b74079d8617351b5"} Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.465710 5108 scope.go:117] "RemoveContainer" containerID="aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.465825 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tx2lf" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.467862 5108 generic.go:358] "Generic (PLEG): container finished" podID="a0eab168-419a-4cb1-b318-244a89a1af5e" containerID="f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11" exitCode=0 Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.468908 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.469811 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" event={"ID":"a0eab168-419a-4cb1-b318-244a89a1af5e","Type":"ContainerDied","Data":"f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11"} Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.469863 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns" event={"ID":"a0eab168-419a-4cb1-b318-244a89a1af5e","Type":"ContainerDied","Data":"1c737981c381cebd8104199914fd1a4b06af585584456fca65311e19325d03b5"} Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.473484 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p" 
event={"ID":"5be79318-038a-4e92-8837-dc46ede3d903","Type":"ContainerStarted","Data":"3155126c4822f9df6cddd1721ab4b2978c8090bb9e766e6ade2d6804ad2bcb07"}
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.473521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p" event={"ID":"5be79318-038a-4e92-8837-dc46ede3d903","Type":"ContainerStarted","Data":"101f6c9ed0b3bb8da640ca463e1821334de9c6d50bae0cef70dd8a24bab46e35"}
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.473760 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.476876 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-g8w8p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body=
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.476917 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p" podUID="5be79318-038a-4e92-8837-dc46ede3d903" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.477377 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.497754 5108 scope.go:117] "RemoveContainer" containerID="aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.498146 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/265c6a3f-9b00-4500-933f-9fac4ebddc69-client-ca\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.498186 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/265c6a3f-9b00-4500-933f-9fac4ebddc69-tmp\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.498209 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mjhx2\" (UniqueName: \"kubernetes.io/projected/265c6a3f-9b00-4500-933f-9fac4ebddc69-kube-api-access-mjhx2\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.498240 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/265c6a3f-9b00-4500-933f-9fac4ebddc69-serving-cert\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.498277 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/265c6a3f-9b00-4500-933f-9fac4ebddc69-config\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: E1212 14:15:43.498372 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2\": container with ID starting with aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2 not found: ID does not exist" containerID="aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.498397 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2"} err="failed to get container status \"aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2\": rpc error: code = NotFound desc = could not find container \"aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2\": container with ID starting with aecaeaac68eceea28f08563626f34d7d3f3563a3d970846498763e19a38337c2 not found: ID does not exist"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.498417 5108 scope.go:117] "RemoveContainer" containerID="f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.499004 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p" podStartSLOduration=1.49898592 podStartE2EDuration="1.49898592s" podCreationTimestamp="2025-12-12 14:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:43.498064235 +0000 UTC m=+296.406055404" watchObservedRunningTime="2025-12-12 14:15:43.49898592 +0000 UTC m=+296.406977079"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.499550 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/265c6a3f-9b00-4500-933f-9fac4ebddc69-tmp\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.499963 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/265c6a3f-9b00-4500-933f-9fac4ebddc69-client-ca\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.503589 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/265c6a3f-9b00-4500-933f-9fac4ebddc69-config\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.505822 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/265c6a3f-9b00-4500-933f-9fac4ebddc69-serving-cert\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.520196 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjhx2\" (UniqueName: \"kubernetes.io/projected/265c6a3f-9b00-4500-933f-9fac4ebddc69-kube-api-access-mjhx2\") pod \"route-controller-manager-58c5495c48-bf4fj\" (UID: \"265c6a3f-9b00-4500-933f-9fac4ebddc69\") " pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.533871 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"]
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.533933 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tx2lf"]
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.538697 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"]
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.541039 5108 scope.go:117] "RemoveContainer" containerID="f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11"
Dec 12 14:15:43 crc kubenswrapper[5108]: E1212 14:15:43.542062 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11\": container with ID starting with f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11 not found: ID does not exist" containerID="f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.542116 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11"} err="failed to get container status \"f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11\": rpc error: code = NotFound desc = could not find container \"f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11\": container with ID starting with f206e2a1c591e80d92d202ba0f98e77da61db894efa3e1bc4ee38378594a8c11 not found: ID does not exist"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.546543 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wspns"]
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.631451 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.651104 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"]
Dec 12 14:15:43 crc kubenswrapper[5108]: W1212 14:15:43.664172 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c0779f_318f_4ca3_ae9f_6a4954d8d814.slice/crio-f936ab247981df26ad168adb81fc02e6386dd7973cc6ff5c3002d4851efd38a1 WatchSource:0}: Error finding container f936ab247981df26ad168adb81fc02e6386dd7973cc6ff5c3002d4851efd38a1: Status 404 returned error can't find the container with id f936ab247981df26ad168adb81fc02e6386dd7973cc6ff5c3002d4851efd38a1
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.760333 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"]
Dec 12 14:15:43 crc kubenswrapper[5108]: I1212 14:15:43.829448 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkdxn"]
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.086998 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6tf52"]
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.095909 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.100262 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.113480 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tf52"]
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.184260 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"]
Dec 12 14:15:44 crc kubenswrapper[5108]: W1212 14:15:44.205342 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod265c6a3f_9b00_4500_933f_9fac4ebddc69.slice/crio-7f5c7142ed44bc24dec3414c567003cbdcc1390e4307933d4502466665e9e4df WatchSource:0}: Error finding container 7f5c7142ed44bc24dec3414c567003cbdcc1390e4307933d4502466665e9e4df: Status 404 returned error can't find the container with id 7f5c7142ed44bc24dec3414c567003cbdcc1390e4307933d4502466665e9e4df
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.212191 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37cda489-68c9-43d3-bc72-56593f33ca6f-catalog-content\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.212294 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdg2p\" (UniqueName: \"kubernetes.io/projected/37cda489-68c9-43d3-bc72-56593f33ca6f-kube-api-access-sdg2p\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.212332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37cda489-68c9-43d3-bc72-56593f33ca6f-utilities\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.313637 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sdg2p\" (UniqueName: \"kubernetes.io/projected/37cda489-68c9-43d3-bc72-56593f33ca6f-kube-api-access-sdg2p\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.313690 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37cda489-68c9-43d3-bc72-56593f33ca6f-utilities\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.313727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37cda489-68c9-43d3-bc72-56593f33ca6f-catalog-content\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.314202 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37cda489-68c9-43d3-bc72-56593f33ca6f-catalog-content\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.314287 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37cda489-68c9-43d3-bc72-56593f33ca6f-utilities\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.333640 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdg2p\" (UniqueName: \"kubernetes.io/projected/37cda489-68c9-43d3-bc72-56593f33ca6f-kube-api-access-sdg2p\") pod \"redhat-operators-6tf52\" (UID: \"37cda489-68c9-43d3-bc72-56593f33ca6f\") " pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.455819 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tf52"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.482232 5108 generic.go:358] "Generic (PLEG): container finished" podID="16c0779f-318f-4ca3-ae9f-6a4954d8d814" containerID="83cf4b1e24b3b0c488e364755af91d88c3c36ff13c866870870c0972b06339c3" exitCode=0
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.482345 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42" event={"ID":"16c0779f-318f-4ca3-ae9f-6a4954d8d814","Type":"ContainerDied","Data":"83cf4b1e24b3b0c488e364755af91d88c3c36ff13c866870870c0972b06339c3"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.482377 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42" event={"ID":"16c0779f-318f-4ca3-ae9f-6a4954d8d814","Type":"ContainerStarted","Data":"f936ab247981df26ad168adb81fc02e6386dd7973cc6ff5c3002d4851efd38a1"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.485608 5108 generic.go:358] "Generic (PLEG): container finished" podID="d71f524c-38b4-4eca-b9d2-a0dc97e4ef02" containerID="51d6291318f7f0de0d01703032c409d82606998855751728449cbec1267691a6" exitCode=0
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.485717 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2hqp" event={"ID":"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02","Type":"ContainerDied","Data":"51d6291318f7f0de0d01703032c409d82606998855751728449cbec1267691a6"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.488029 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" event={"ID":"265c6a3f-9b00-4500-933f-9fac4ebddc69","Type":"ContainerStarted","Data":"219d5a642e2f2c6a9a34f0b44eaa7e6c2e6b9373565e3b7a8e16c7e33f21e3be"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.488067 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" event={"ID":"265c6a3f-9b00-4500-933f-9fac4ebddc69","Type":"ContainerStarted","Data":"7f5c7142ed44bc24dec3414c567003cbdcc1390e4307933d4502466665e9e4df"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.488231 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.500861 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" event={"ID":"f8e139ce-63ec-4858-8765-013c898fc41a","Type":"ContainerStarted","Data":"d60cc0c0a2bc5478b18e977ef788bc34c9f7a287a6856d3b66c9858543bba021"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.500901 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" event={"ID":"f8e139ce-63ec-4858-8765-013c898fc41a","Type":"ContainerStarted","Data":"f032e8950b74cf24d7930d21fa0789f4410f1ccf13537cc4b743cb96640d58d6"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.501975 5108 generic.go:358] "Generic (PLEG): container finished" podID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerID="fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b" exitCode=0
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.502836 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkdxn" event={"ID":"ba7bd770-9185-40a3-ae63-961ee83bd38e","Type":"ContainerDied","Data":"fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.502860 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkdxn" event={"ID":"ba7bd770-9185-40a3-ae63-961ee83bd38e","Type":"ContainerStarted","Data":"2cdfa6f370422d34de92305a518718bc8ea1ecf69501e2b93a501bf93e7b0fc5"}
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.503502 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.514553 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj" podStartSLOduration=2.514529803 podStartE2EDuration="2.514529803s" podCreationTimestamp="2025-12-12 14:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:44.513114644 +0000 UTC m=+297.421105823" watchObservedRunningTime="2025-12-12 14:15:44.514529803 +0000 UTC m=+297.422520952"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.514697 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-g8w8p"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.586898 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" podStartSLOduration=2.58687947 podStartE2EDuration="2.58687947s" podCreationTimestamp="2025-12-12 14:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:44.582043688 +0000 UTC m=+297.490034847" watchObservedRunningTime="2025-12-12 14:15:44.58687947 +0000 UTC m=+297.494870619"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.675484 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"
Dec 12 14:15:44 crc kubenswrapper[5108]: I1212 14:15:44.968189 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tf52"]
Dec 12 14:15:44 crc kubenswrapper[5108]: W1212 14:15:44.972701 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37cda489_68c9_43d3_bc72_56593f33ca6f.slice/crio-75213e781c74cbecf721de6686ac636e343cbb95cf540ad4516c8381493d08af WatchSource:0}: Error finding container 75213e781c74cbecf721de6686ac636e343cbb95cf540ad4516c8381493d08af: Status 404 returned error can't find the container with id 75213e781c74cbecf721de6686ac636e343cbb95cf540ad4516c8381493d08af
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.222799 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58c5495c48-bf4fj"
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.422180 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0eab168-419a-4cb1-b318-244a89a1af5e" path="/var/lib/kubelet/pods/a0eab168-419a-4cb1-b318-244a89a1af5e/volumes"
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.423669 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6da6d66-adc4-4cd5-968f-21877a7820f0" path="/var/lib/kubelet/pods/b6da6d66-adc4-4cd5-968f-21877a7820f0/volumes"
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.509329 5108 generic.go:358] "Generic (PLEG): container finished" podID="37cda489-68c9-43d3-bc72-56593f33ca6f" containerID="ab6c10aa75c0cc64dd355161b2122390822c0d0bf32d6862c1acb6c6eb5594bf" exitCode=0
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.509437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tf52" event={"ID":"37cda489-68c9-43d3-bc72-56593f33ca6f","Type":"ContainerDied","Data":"ab6c10aa75c0cc64dd355161b2122390822c0d0bf32d6862c1acb6c6eb5594bf"}
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.509490 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tf52" event={"ID":"37cda489-68c9-43d3-bc72-56593f33ca6f","Type":"ContainerStarted","Data":"75213e781c74cbecf721de6686ac636e343cbb95cf540ad4516c8381493d08af"}
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.512873 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2hqp" event={"ID":"d71f524c-38b4-4eca-b9d2-a0dc97e4ef02","Type":"ContainerStarted","Data":"345ea8c65846f70879a814597a817c04c8a57a60c4cd21b1c05ff659eb1f0a4a"}
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.556537 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k2hqp" podStartSLOduration=3.358867299 podStartE2EDuration="4.556518485s" podCreationTimestamp="2025-12-12 14:15:41 +0000 UTC" firstStartedPulling="2025-12-12 14:15:42.455539207 +0000 UTC m=+295.363530366" lastFinishedPulling="2025-12-12 14:15:43.653190393 +0000 UTC m=+296.561181552" observedRunningTime="2025-12-12 14:15:45.554245874 +0000 UTC m=+298.462237033" watchObservedRunningTime="2025-12-12 14:15:45.556518485 +0000 UTC m=+298.464509644"
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.740362 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.837130 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c0779f-318f-4ca3-ae9f-6a4954d8d814-config-volume\") pod \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") "
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.837232 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c0779f-318f-4ca3-ae9f-6a4954d8d814-secret-volume\") pod \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") "
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.837334 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpss7\" (UniqueName: \"kubernetes.io/projected/16c0779f-318f-4ca3-ae9f-6a4954d8d814-kube-api-access-bpss7\") pod \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\" (UID: \"16c0779f-318f-4ca3-ae9f-6a4954d8d814\") "
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.837659 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c0779f-318f-4ca3-ae9f-6a4954d8d814-config-volume" (OuterVolumeSpecName: "config-volume") pod "16c0779f-318f-4ca3-ae9f-6a4954d8d814" (UID: "16c0779f-318f-4ca3-ae9f-6a4954d8d814"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.843196 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c0779f-318f-4ca3-ae9f-6a4954d8d814-kube-api-access-bpss7" (OuterVolumeSpecName: "kube-api-access-bpss7") pod "16c0779f-318f-4ca3-ae9f-6a4954d8d814" (UID: "16c0779f-318f-4ca3-ae9f-6a4954d8d814"). InnerVolumeSpecName "kube-api-access-bpss7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.845062 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c0779f-318f-4ca3-ae9f-6a4954d8d814-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16c0779f-318f-4ca3-ae9f-6a4954d8d814" (UID: "16c0779f-318f-4ca3-ae9f-6a4954d8d814"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.938883 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c0779f-318f-4ca3-ae9f-6a4954d8d814-config-volume\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.938913 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c0779f-318f-4ca3-ae9f-6a4954d8d814-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:45 crc kubenswrapper[5108]: I1212 14:15:45.938922 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bpss7\" (UniqueName: \"kubernetes.io/projected/16c0779f-318f-4ca3-ae9f-6a4954d8d814-kube-api-access-bpss7\") on node \"crc\" DevicePath \"\""
Dec 12 14:15:46 crc kubenswrapper[5108]: I1212 14:15:46.521065 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42"
Dec 12 14:15:46 crc kubenswrapper[5108]: I1212 14:15:46.521119 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-zsd42" event={"ID":"16c0779f-318f-4ca3-ae9f-6a4954d8d814","Type":"ContainerDied","Data":"f936ab247981df26ad168adb81fc02e6386dd7973cc6ff5c3002d4851efd38a1"}
Dec 12 14:15:46 crc kubenswrapper[5108]: I1212 14:15:46.521410 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f936ab247981df26ad168adb81fc02e6386dd7973cc6ff5c3002d4851efd38a1"
Dec 12 14:15:46 crc kubenswrapper[5108]: I1212 14:15:46.523897 5108 generic.go:358] "Generic (PLEG): container finished" podID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerID="4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6" exitCode=0
Dec 12 14:15:46 crc kubenswrapper[5108]: I1212 14:15:46.524593 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkdxn" event={"ID":"ba7bd770-9185-40a3-ae63-961ee83bd38e","Type":"ContainerDied","Data":"4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6"}
Dec 12 14:15:47 crc kubenswrapper[5108]: I1212 14:15:47.537261 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7mr7" event={"ID":"b817ef38-a171-4c25-95d7-f3e73a2c56c7","Type":"ContainerStarted","Data":"0683f268eed775c0e15e87cfb4ec683368d7f64b3566a19950d99fb2a80a8186"}
Dec 12 14:15:47 crc kubenswrapper[5108]: I1212 14:15:47.544370 5108 generic.go:358] "Generic (PLEG): container finished" podID="37cda489-68c9-43d3-bc72-56593f33ca6f" containerID="fb9814263eec646562ccba9c58e8450f06807e50c74cab05761412b0170c0300" exitCode=0
Dec 12 14:15:47 crc kubenswrapper[5108]: I1212 14:15:47.544576 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tf52" event={"ID":"37cda489-68c9-43d3-bc72-56593f33ca6f","Type":"ContainerDied","Data":"fb9814263eec646562ccba9c58e8450f06807e50c74cab05761412b0170c0300"}
Dec 12 14:15:47 crc kubenswrapper[5108]: I1212 14:15:47.550188 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkdxn" event={"ID":"ba7bd770-9185-40a3-ae63-961ee83bd38e","Type":"ContainerStarted","Data":"9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42"}
Dec 12 14:15:47 crc kubenswrapper[5108]: I1212 14:15:47.600265 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkdxn" podStartSLOduration=4.711955132 podStartE2EDuration="5.60006093s" podCreationTimestamp="2025-12-12 14:15:42 +0000 UTC" firstStartedPulling="2025-12-12 14:15:44.502724792 +0000 UTC m=+297.410715951" lastFinishedPulling="2025-12-12 14:15:45.39083059 +0000 UTC m=+298.298821749" observedRunningTime="2025-12-12 14:15:47.599024172 +0000 UTC m=+300.507015341" watchObservedRunningTime="2025-12-12 14:15:47.60006093 +0000 UTC m=+300.508052099"
Dec 12 14:15:47 crc kubenswrapper[5108]: I1212 14:15:47.610378 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:15:47 crc kubenswrapper[5108]: I1212 14:15:47.610640 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:15:48 crc kubenswrapper[5108]: I1212 14:15:48.560421 5108 generic.go:358] "Generic (PLEG): container finished" podID="b817ef38-a171-4c25-95d7-f3e73a2c56c7" containerID="0683f268eed775c0e15e87cfb4ec683368d7f64b3566a19950d99fb2a80a8186" exitCode=0
Dec 12 14:15:48 crc kubenswrapper[5108]: I1212 14:15:48.560469 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7mr7" event={"ID":"b817ef38-a171-4c25-95d7-f3e73a2c56c7","Type":"ContainerDied","Data":"0683f268eed775c0e15e87cfb4ec683368d7f64b3566a19950d99fb2a80a8186"}
Dec 12 14:15:48 crc kubenswrapper[5108]: I1212 14:15:48.562444 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 14:15:48 crc kubenswrapper[5108]: I1212 14:15:48.564353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tf52" event={"ID":"37cda489-68c9-43d3-bc72-56593f33ca6f","Type":"ContainerStarted","Data":"6a6e8aed0783dda7522a6c4b7004961e0c5a35ae01e773a5d75ec195b8ec3278"}
Dec 12 14:15:48 crc kubenswrapper[5108]: I1212 14:15:48.602230 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6tf52" podStartSLOduration=3.684912407 podStartE2EDuration="4.602209569s" podCreationTimestamp="2025-12-12 14:15:44 +0000 UTC" firstStartedPulling="2025-12-12 14:15:45.510510054 +0000 UTC m=+298.418501213" lastFinishedPulling="2025-12-12 14:15:46.427807216 +0000 UTC m=+299.335798375" observedRunningTime="2025-12-12 14:15:48.599244708 +0000 UTC m=+301.507235887" watchObservedRunningTime="2025-12-12 14:15:48.602209569 +0000 UTC m=+301.510200728"
Dec 12 14:15:49 crc kubenswrapper[5108]: I1212 14:15:49.573425 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7mr7" event={"ID":"b817ef38-a171-4c25-95d7-f3e73a2c56c7","Type":"ContainerStarted","Data":"aed00726b30e9a307eea07b327d5e7c9d926248a8cc8b649dd15686ec13041ee"}
Dec 12 14:15:49 crc kubenswrapper[5108]: I1212 14:15:49.591517 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p7mr7" podStartSLOduration=3.728220374 podStartE2EDuration="8.591483128s" podCreationTimestamp="2025-12-12 14:15:41 +0000 UTC" firstStartedPulling="2025-12-12 14:15:42.454194782 +0000 UTC m=+295.362185941" lastFinishedPulling="2025-12-12 14:15:47.317457536 +0000 UTC m=+300.225448695" observedRunningTime="2025-12-12 14:15:49.591337484 +0000 UTC m=+302.499328663" watchObservedRunningTime="2025-12-12 14:15:49.591483128 +0000 UTC m=+302.499474287"
Dec 12 14:15:49 crc kubenswrapper[5108]: I1212 14:15:49.986365 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:15:49 crc kubenswrapper[5108]: I1212 14:15:49.986627 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:15:49 crc kubenswrapper[5108]: I1212 14:15:49.986667 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w294k"
Dec 12 14:15:49 crc kubenswrapper[5108]: I1212 14:15:49.987206 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59bb44262fa109c767656d9eb9c0c339275e7515ac478a28e10f263d2cb3e961"} pod="openshift-machine-config-operator/machine-config-daemon-w294k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 14:15:49 crc kubenswrapper[5108]: I1212 14:15:49.987259 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" containerID="cri-o://59bb44262fa109c767656d9eb9c0c339275e7515ac478a28e10f263d2cb3e961" gracePeriod=600
Dec 12 14:15:50 crc kubenswrapper[5108]: I1212 14:15:50.580173 5108 generic.go:358] "Generic (PLEG): container finished" podID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerID="59bb44262fa109c767656d9eb9c0c339275e7515ac478a28e10f263d2cb3e961" exitCode=0
Dec 12 14:15:50 crc kubenswrapper[5108]: I1212 14:15:50.580277 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerDied","Data":"59bb44262fa109c767656d9eb9c0c339275e7515ac478a28e10f263d2cb3e961"}
Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.414102 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.414142 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.450091 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k2hqp"
Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.589227 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"7adea82340555f4f84d2706f521506fd50722f67e31c512652c717dcaa73a33c"}
Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.623510 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.623561 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-p7mr7"
Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.632471 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k2hqp" Dec 12 14:15:51 crc kubenswrapper[5108]: I1212 14:15:51.689335 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p7mr7" Dec 12 14:15:53 crc kubenswrapper[5108]: I1212 14:15:53.282307 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-mkdxn" Dec 12 14:15:53 crc kubenswrapper[5108]: I1212 14:15:53.283288 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkdxn" Dec 12 14:15:53 crc kubenswrapper[5108]: I1212 14:15:53.341775 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkdxn" Dec 12 14:15:53 crc kubenswrapper[5108]: I1212 14:15:53.652604 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkdxn" Dec 12 14:15:53 crc kubenswrapper[5108]: I1212 14:15:53.657301 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p7mr7" Dec 12 14:15:53 crc kubenswrapper[5108]: I1212 14:15:53.956962 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"] Dec 12 14:15:53 crc kubenswrapper[5108]: I1212 14:15:53.957251 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" podUID="f8e139ce-63ec-4858-8765-013c898fc41a" containerName="controller-manager" containerID="cri-o://d60cc0c0a2bc5478b18e977ef788bc34c9f7a287a6856d3b66c9858543bba021" gracePeriod=30 Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.456682 5108 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6tf52" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.456894 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6tf52" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.517964 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6tf52" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.607921 5108 generic.go:358] "Generic (PLEG): container finished" podID="f8e139ce-63ec-4858-8765-013c898fc41a" containerID="d60cc0c0a2bc5478b18e977ef788bc34c9f7a287a6856d3b66c9858543bba021" exitCode=0 Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.608962 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" event={"ID":"f8e139ce-63ec-4858-8765-013c898fc41a","Type":"ContainerDied","Data":"d60cc0c0a2bc5478b18e977ef788bc34c9f7a287a6856d3b66c9858543bba021"} Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.670455 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6tf52" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.726292 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.753805 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c45c679d8-9dbdt"] Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.754530 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8e139ce-63ec-4858-8765-013c898fc41a" containerName="controller-manager" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.754608 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e139ce-63ec-4858-8765-013c898fc41a" containerName="controller-manager" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.754684 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16c0779f-318f-4ca3-ae9f-6a4954d8d814" containerName="collect-profiles" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.754746 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c0779f-318f-4ca3-ae9f-6a4954d8d814" containerName="collect-profiles" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.754892 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="16c0779f-318f-4ca3-ae9f-6a4954d8d814" containerName="collect-profiles" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.763178 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8e139ce-63ec-4858-8765-013c898fc41a" containerName="controller-manager" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.775703 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-config\") pod \"f8e139ce-63ec-4858-8765-013c898fc41a\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.775780 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e139ce-63ec-4858-8765-013c898fc41a-serving-cert\") pod \"f8e139ce-63ec-4858-8765-013c898fc41a\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.775810 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhs6s\" (UniqueName: \"kubernetes.io/projected/f8e139ce-63ec-4858-8765-013c898fc41a-kube-api-access-nhs6s\") pod \"f8e139ce-63ec-4858-8765-013c898fc41a\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.775904 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8e139ce-63ec-4858-8765-013c898fc41a-tmp\") pod \"f8e139ce-63ec-4858-8765-013c898fc41a\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.775925 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-proxy-ca-bundles\") pod \"f8e139ce-63ec-4858-8765-013c898fc41a\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.775947 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-client-ca\") pod \"f8e139ce-63ec-4858-8765-013c898fc41a\" (UID: \"f8e139ce-63ec-4858-8765-013c898fc41a\") " Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.776992 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-client-ca" (OuterVolumeSpecName: "client-ca") pod "f8e139ce-63ec-4858-8765-013c898fc41a" (UID: "f8e139ce-63ec-4858-8765-013c898fc41a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.777409 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8e139ce-63ec-4858-8765-013c898fc41a-tmp" (OuterVolumeSpecName: "tmp") pod "f8e139ce-63ec-4858-8765-013c898fc41a" (UID: "f8e139ce-63ec-4858-8765-013c898fc41a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.777860 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f8e139ce-63ec-4858-8765-013c898fc41a" (UID: "f8e139ce-63ec-4858-8765-013c898fc41a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.778367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-config" (OuterVolumeSpecName: "config") pod "f8e139ce-63ec-4858-8765-013c898fc41a" (UID: "f8e139ce-63ec-4858-8765-013c898fc41a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.877276 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8e139ce-63ec-4858-8765-013c898fc41a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.877313 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.877323 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:54 crc kubenswrapper[5108]: I1212 14:15:54.877333 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e139ce-63ec-4858-8765-013c898fc41a-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.180802 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e139ce-63ec-4858-8765-013c898fc41a-kube-api-access-nhs6s" (OuterVolumeSpecName: "kube-api-access-nhs6s") pod "f8e139ce-63ec-4858-8765-013c898fc41a" (UID: "f8e139ce-63ec-4858-8765-013c898fc41a"). InnerVolumeSpecName "kube-api-access-nhs6s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.188008 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhs6s\" (UniqueName: \"kubernetes.io/projected/f8e139ce-63ec-4858-8765-013c898fc41a-kube-api-access-nhs6s\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.189823 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e139ce-63ec-4858-8765-013c898fc41a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f8e139ce-63ec-4858-8765-013c898fc41a" (UID: "f8e139ce-63ec-4858-8765-013c898fc41a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.289150 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e139ce-63ec-4858-8765-013c898fc41a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.449354 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.465197 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45c679d8-9dbdt"] Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.492245 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-config\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.492357 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-client-ca\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.492401 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76327d91-fc4b-420f-8fbf-3eecaea1ef04-serving-cert\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.492418 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76327d91-fc4b-420f-8fbf-3eecaea1ef04-tmp\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 
12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.492445 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-proxy-ca-bundles\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.492546 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fck62\" (UniqueName: \"kubernetes.io/projected/76327d91-fc4b-420f-8fbf-3eecaea1ef04-kube-api-access-fck62\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.506043 5108 patch_prober.go:28] interesting pod/controller-manager-8677dbb44d-xbgwl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.506192 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" podUID="f8e139ce-63ec-4858-8765-013c898fc41a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.593284 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fck62\" (UniqueName: 
\"kubernetes.io/projected/76327d91-fc4b-420f-8fbf-3eecaea1ef04-kube-api-access-fck62\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.593355 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-config\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.593409 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-client-ca\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.593447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76327d91-fc4b-420f-8fbf-3eecaea1ef04-serving-cert\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.593466 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76327d91-fc4b-420f-8fbf-3eecaea1ef04-tmp\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.593490 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-proxy-ca-bundles\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.594768 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76327d91-fc4b-420f-8fbf-3eecaea1ef04-tmp\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.595170 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-client-ca\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.595321 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-proxy-ca-bundles\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.595465 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76327d91-fc4b-420f-8fbf-3eecaea1ef04-config\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 
14:15:55.601059 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76327d91-fc4b-420f-8fbf-3eecaea1ef04-serving-cert\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.616193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fck62\" (UniqueName: \"kubernetes.io/projected/76327d91-fc4b-420f-8fbf-3eecaea1ef04-kube-api-access-fck62\") pod \"controller-manager-7c45c679d8-9dbdt\" (UID: \"76327d91-fc4b-420f-8fbf-3eecaea1ef04\") " pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.623049 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.623073 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8677dbb44d-xbgwl" event={"ID":"f8e139ce-63ec-4858-8765-013c898fc41a","Type":"ContainerDied","Data":"f032e8950b74cf24d7930d21fa0789f4410f1ccf13537cc4b743cb96640d58d6"} Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.623222 5108 scope.go:117] "RemoveContainer" containerID="d60cc0c0a2bc5478b18e977ef788bc34c9f7a287a6856d3b66c9858543bba021" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.650964 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"] Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.655422 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8677dbb44d-xbgwl"] Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.775850 5108 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:55 crc kubenswrapper[5108]: I1212 14:15:55.983041 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45c679d8-9dbdt"] Dec 12 14:15:56 crc kubenswrapper[5108]: I1212 14:15:56.631503 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" event={"ID":"76327d91-fc4b-420f-8fbf-3eecaea1ef04","Type":"ContainerStarted","Data":"ace845bd18fa459963bdb801b2f944f3e808b45a9176ec88265ef1617a689e1c"} Dec 12 14:15:56 crc kubenswrapper[5108]: I1212 14:15:56.632821 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" event={"ID":"76327d91-fc4b-420f-8fbf-3eecaea1ef04","Type":"ContainerStarted","Data":"c5fa81091bd6bf43d7be004f2f1f48e19ba14905ab36dccee902085a24d41433"} Dec 12 14:15:56 crc kubenswrapper[5108]: I1212 14:15:56.632948 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" Dec 12 14:15:56 crc kubenswrapper[5108]: I1212 14:15:56.653177 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" podStartSLOduration=3.653156889 podStartE2EDuration="3.653156889s" podCreationTimestamp="2025-12-12 14:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:56.652163782 +0000 UTC m=+309.560154951" watchObservedRunningTime="2025-12-12 14:15:56.653156889 +0000 UTC m=+309.561148048" Dec 12 14:15:57 crc kubenswrapper[5108]: I1212 14:15:57.095369 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c45c679d8-9dbdt" 
Dec 12 14:15:57 crc kubenswrapper[5108]: I1212 14:15:57.414870 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e139ce-63ec-4858-8765-013c898fc41a" path="/var/lib/kubelet/pods/f8e139ce-63ec-4858-8765-013c898fc41a/volumes" Dec 12 14:16:07 crc kubenswrapper[5108]: I1212 14:16:07.660507 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" podUID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" containerName="oauth-openshift" containerID="cri-o://2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1" gracePeriod=15 Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.070557 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.103524 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6ddbd69885-gmd66"] Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.104211 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" containerName="oauth-openshift" Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.104225 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" containerName="oauth-openshift" Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.104370 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" containerName="oauth-openshift" Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.116671 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ddbd69885-gmd66"] Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.116803 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66" Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.142810 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-serving-cert\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143200 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-idp-0-file-data\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143237 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-ocp-branding-template\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143273 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-dir\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143304 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-policies\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") " Dec 12 14:16:08 crc 
kubenswrapper[5108]: I1212 14:16:08.143342 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-session\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143373 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-service-ca\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143399 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-error\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143462 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-router-certs\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143492 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-provider-selection\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143535 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-login\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143593 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-cliconfig\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143626 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-trusted-ca-bundle\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.143659 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6s8q\" (UniqueName: \"kubernetes.io/projected/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-kube-api-access-s6s8q\") pod \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\" (UID: \"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26\") "
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.145058 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.145488 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.145562 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.145608 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.145700 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-session\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.145996 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146039 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146068 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-audit-dir\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146157 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146206 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146238 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146282 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146286 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4h8\" (UniqueName: \"kubernetes.io/projected/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-kube-api-access-xj4h8\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146562 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146621 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146668 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146686 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146712 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-audit-policies\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146740 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146788 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146799 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146808 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146819 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.146828 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.148718 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.149054 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.149376 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.150405 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.150640 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.150944 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.151102 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.159397 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-kube-api-access-s6s8q" (OuterVolumeSpecName: "kube-api-access-s6s8q") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "kube-api-access-s6s8q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.160222 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" (UID: "9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.247916 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.247962 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.247990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-audit-policies\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-session\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248098 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248127 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-audit-dir\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248150 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248181 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248204 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248230 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4h8\" (UniqueName: \"kubernetes.io/projected/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-kube-api-access-xj4h8\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248305 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248345 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248390 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248405 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248420 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248433 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248446 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248459 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248470 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248481 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s6s8q\" (UniqueName: \"kubernetes.io/projected/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-kube-api-access-s6s8q\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.248493 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.249055 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.249141 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-audit-policies\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.249999 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-audit-dir\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.250428 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.250599 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.252115 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.252127 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.252281 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.252695 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.253182 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.253507 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.253935 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-session\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.254474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.266185 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4h8\" (UniqueName: \"kubernetes.io/projected/fea084c2-3c2c-4f83-84ad-fa54c63ebaf1-kube-api-access-xj4h8\") pod \"oauth-openshift-6ddbd69885-gmd66\" (UID: \"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1\") " pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.429463 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.701713 5108 generic.go:358] "Generic (PLEG): container finished" podID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" containerID="2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1" exitCode=0
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.701766 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" event={"ID":"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26","Type":"ContainerDied","Data":"2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1"}
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.701796 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz" event={"ID":"9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26","Type":"ContainerDied","Data":"e924e3462ad378699e8a705bef7b0a9f61f76711930516190cc813395f53179e"}
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.701800 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-blzxz"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.701815 5108 scope.go:117] "RemoveContainer" containerID="2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.724026 5108 scope.go:117] "RemoveContainer" containerID="2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1"
Dec 12 14:16:08 crc kubenswrapper[5108]: E1212 14:16:08.724398 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1\": container with ID starting with 2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1 not found: ID does not exist" containerID="2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.724423 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1"} err="failed to get container status \"2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1\": rpc error: code = NotFound desc = could not find container \"2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1\": container with ID starting with 2bdc3adf14961fcca4e288424a14d95f0a1dba6c91b73b18042b6a5bdd0429f1 not found: ID does not exist"
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.732267 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-blzxz"]
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.736546 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-blzxz"]
Dec 12 14:16:08 crc kubenswrapper[5108]: I1212 14:16:08.830019 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ddbd69885-gmd66"]
Dec 12 14:16:08 crc kubenswrapper[5108]: W1212 14:16:08.836232 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea084c2_3c2c_4f83_84ad_fa54c63ebaf1.slice/crio-1834e381f75904bf24e645173ff930770fc794ba3b3804ff3cefef930df6ff97 WatchSource:0}: Error finding container 1834e381f75904bf24e645173ff930770fc794ba3b3804ff3cefef930df6ff97: Status 404 returned error can't find the container with id 1834e381f75904bf24e645173ff930770fc794ba3b3804ff3cefef930df6ff97
Dec 12 14:16:09 crc kubenswrapper[5108]: I1212 14:16:09.415387 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26" path="/var/lib/kubelet/pods/9d8f11bf-4b9f-4ba0-b9b6-8299e55e8a26/volumes"
Dec 12 14:16:09 crc kubenswrapper[5108]: I1212 14:16:09.708970 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66" event={"ID":"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1","Type":"ContainerStarted","Data":"e91bc56759ca04b91f2dfca584fb7be41362f891a03e1971239048874e35097d"}
Dec 12 14:16:09 crc kubenswrapper[5108]: I1212 14:16:09.709026 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66" event={"ID":"fea084c2-3c2c-4f83-84ad-fa54c63ebaf1","Type":"ContainerStarted","Data":"1834e381f75904bf24e645173ff930770fc794ba3b3804ff3cefef930df6ff97"}
Dec 12 14:16:09 crc kubenswrapper[5108]: I1212 14:16:09.709373 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:16:09 crc kubenswrapper[5108]: I1212 14:16:09.733011 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66" podStartSLOduration=27.732990856 podStartE2EDuration="27.732990856s" podCreationTimestamp="2025-12-12 14:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:16:09.730497819 +0000 UTC m=+322.638488978" watchObservedRunningTime="2025-12-12 14:16:09.732990856 +0000 UTC m=+322.640982015"
Dec 12 14:16:09 crc kubenswrapper[5108]: I1212 14:16:09.806363 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6ddbd69885-gmd66"
Dec 12 14:18:19 crc kubenswrapper[5108]: I1212 14:18:19.986479 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:18:19 crc kubenswrapper[5108]: I1212 14:18:19.987208 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:18:49 crc kubenswrapper[5108]: I1212 14:18:49.986853 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:18:49 crc kubenswrapper[5108]: I1212 14:18:49.987888 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect:
connection refused" Dec 12 14:19:19 crc kubenswrapper[5108]: I1212 14:19:19.986761 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:19:19 crc kubenswrapper[5108]: I1212 14:19:19.987220 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:19:19 crc kubenswrapper[5108]: I1212 14:19:19.987288 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:19:19 crc kubenswrapper[5108]: I1212 14:19:19.989194 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7adea82340555f4f84d2706f521506fd50722f67e31c512652c717dcaa73a33c"} pod="openshift-machine-config-operator/machine-config-daemon-w294k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:19:19 crc kubenswrapper[5108]: I1212 14:19:19.989331 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" containerID="cri-o://7adea82340555f4f84d2706f521506fd50722f67e31c512652c717dcaa73a33c" gracePeriod=600 Dec 12 14:19:20 crc kubenswrapper[5108]: I1212 14:19:20.856543 5108 generic.go:358] "Generic (PLEG): container finished" podID="fcb30c12-8b29-461d-ab3e-a76577b664d6" 
containerID="7adea82340555f4f84d2706f521506fd50722f67e31c512652c717dcaa73a33c" exitCode=0 Dec 12 14:19:20 crc kubenswrapper[5108]: I1212 14:19:20.856696 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerDied","Data":"7adea82340555f4f84d2706f521506fd50722f67e31c512652c717dcaa73a33c"} Dec 12 14:19:20 crc kubenswrapper[5108]: I1212 14:19:20.857068 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"f71b57f29e2ff270e6b601b2c583e9205b2e1619208bf2a39737e6a39c51a2f1"} Dec 12 14:19:20 crc kubenswrapper[5108]: I1212 14:19:20.857114 5108 scope.go:117] "RemoveContainer" containerID="59bb44262fa109c767656d9eb9c0c339275e7515ac478a28e10f263d2cb3e961" Dec 12 14:20:47 crc kubenswrapper[5108]: I1212 14:20:47.782879 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 14:20:47 crc kubenswrapper[5108]: I1212 14:20:47.787195 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 14:20:56 crc kubenswrapper[5108]: I1212 14:20:56.021017 5108 ???:1] "http: TLS handshake error from 192.168.126.11:51700: no serving certificate available for the kubelet" Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.676129 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4"] Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.676983 5108 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="kube-rbac-proxy" containerID="cri-o://6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.677133 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="ovnkube-cluster-manager" containerID="cri-o://9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.883663 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69wzc"] Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.884822 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovn-controller" containerID="cri-o://9da4bf297887a716ed638824bbce5aca0592ab7354dff37269b576a4154f6b66" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.884974 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-node" containerID="cri-o://a1f681c1c61bf023f01cbca01e489ba9853462e7471cc85cc24e1b5da86096ea" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.885055 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovn-acl-logging" containerID="cri-o://9919b79275f59aff26b0acffc3954a149d74c9173a5c44d77512934a99cadd03" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.884950 5108 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bcfb8a5acb80dea15b10468780de99a6fb687ef49e693d7fb552ed187b78607b" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.884924 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="sbdb" containerID="cri-o://e77382ed2a634eba38b927f4046daeb8627465aaa3f0f1328f36300bd391925d" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.884963 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="northd" containerID="cri-o://f941cb5e5a8e0f562bf1274b00288a3e58fe27459711c3e231201377c4cb7a10" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.884898 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="nbdb" containerID="cri-o://5615ed6026dc7cc3d5c646cc273ee282bf8790a71ae4a50ea8a8067550bf067f" gracePeriod=30 Dec 12 14:21:45 crc kubenswrapper[5108]: I1212 14:21:45.909184 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovnkube-controller" containerID="cri-o://3904bdd05696ca809605e7ff25066a563efbdf5d7e944cc4cc56b32b255f428e" gracePeriod=30 Dec 12 14:21:46 crc kubenswrapper[5108]: I1212 14:21:46.711001 5108 generic.go:358] "Generic (PLEG): container finished" podID="d8603a7b-d127-481c-8901-fff3b6f9f38b" 
containerID="6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" exitCode=0 Dec 12 14:21:46 crc kubenswrapper[5108]: I1212 14:21:46.711134 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" event={"ID":"d8603a7b-d127-481c-8901-fff3b6f9f38b","Type":"ContainerDied","Data":"6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324"} Dec 12 14:21:46 crc kubenswrapper[5108]: I1212 14:21:46.716096 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wzc_934d8f16-46da-4779-8ab8-31b05d1e8b5c/ovn-acl-logging/0.log" Dec 12 14:21:46 crc kubenswrapper[5108]: I1212 14:21:46.716822 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="9919b79275f59aff26b0acffc3954a149d74c9173a5c44d77512934a99cadd03" exitCode=143 Dec 12 14:21:46 crc kubenswrapper[5108]: I1212 14:21:46.716866 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"9919b79275f59aff26b0acffc3954a149d74c9173a5c44d77512934a99cadd03"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.706999 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.735041 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wzc_934d8f16-46da-4779-8ab8-31b05d1e8b5c/ovn-acl-logging/0.log" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.735222 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx"] Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.735903 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="kube-rbac-proxy" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.735923 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="kube-rbac-proxy" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.735945 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="ovnkube-cluster-manager" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.735951 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="ovnkube-cluster-manager" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736044 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="ovnkube-cluster-manager" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736056 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" containerName="kube-rbac-proxy" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736171 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wzc_934d8f16-46da-4779-8ab8-31b05d1e8b5c/ovn-controller/0.log" Dec 12 
14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736769 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="3904bdd05696ca809605e7ff25066a563efbdf5d7e944cc4cc56b32b255f428e" exitCode=0 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736798 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="e77382ed2a634eba38b927f4046daeb8627465aaa3f0f1328f36300bd391925d" exitCode=0 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736807 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="5615ed6026dc7cc3d5c646cc273ee282bf8790a71ae4a50ea8a8067550bf067f" exitCode=0 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736814 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="f941cb5e5a8e0f562bf1274b00288a3e58fe27459711c3e231201377c4cb7a10" exitCode=0 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736821 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="bcfb8a5acb80dea15b10468780de99a6fb687ef49e693d7fb552ed187b78607b" exitCode=0 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736830 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="a1f681c1c61bf023f01cbca01e489ba9853462e7471cc85cc24e1b5da86096ea" exitCode=0 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.736836 5108 generic.go:358] "Generic (PLEG): container finished" podID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerID="9da4bf297887a716ed638824bbce5aca0592ab7354dff37269b576a4154f6b66" exitCode=143 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.740422 5108 generic.go:358] "Generic (PLEG): container finished" podID="d8603a7b-d127-481c-8901-fff3b6f9f38b" 
containerID="9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98" exitCode=0 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.740761 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"3904bdd05696ca809605e7ff25066a563efbdf5d7e944cc4cc56b32b255f428e"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.741490 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"e77382ed2a634eba38b927f4046daeb8627465aaa3f0f1328f36300bd391925d"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.740889 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.740989 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742099 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovn-control-plane-metrics-cert\") pod \"d8603a7b-d127-481c-8901-fff3b6f9f38b\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742191 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-env-overrides\") pod \"d8603a7b-d127-481c-8901-fff3b6f9f38b\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742304 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tz82\" (UniqueName: \"kubernetes.io/projected/d8603a7b-d127-481c-8901-fff3b6f9f38b-kube-api-access-7tz82\") pod \"d8603a7b-d127-481c-8901-fff3b6f9f38b\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742477 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovnkube-config\") pod \"d8603a7b-d127-481c-8901-fff3b6f9f38b\" (UID: \"d8603a7b-d127-481c-8901-fff3b6f9f38b\") " Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742550 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"5615ed6026dc7cc3d5c646cc273ee282bf8790a71ae4a50ea8a8067550bf067f"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742570 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"f941cb5e5a8e0f562bf1274b00288a3e58fe27459711c3e231201377c4cb7a10"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742579 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"bcfb8a5acb80dea15b10468780de99a6fb687ef49e693d7fb552ed187b78607b"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742588 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"a1f681c1c61bf023f01cbca01e489ba9853462e7471cc85cc24e1b5da86096ea"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742596 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"9da4bf297887a716ed638824bbce5aca0592ab7354dff37269b576a4154f6b66"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742605 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" event={"ID":"d8603a7b-d127-481c-8901-fff3b6f9f38b","Type":"ContainerDied","Data":"9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742652 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4" event={"ID":"d8603a7b-d127-481c-8901-fff3b6f9f38b","Type":"ContainerDied","Data":"18c74d41b1790e55c6779339ddb0156f97c829114e50a66c9954e0eedd31b330"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.742669 5108 scope.go:117] "RemoveContainer" 
containerID="9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.743378 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d8603a7b-d127-481c-8901-fff3b6f9f38b" (UID: "d8603a7b-d127-481c-8901-fff3b6f9f38b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.743463 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d8603a7b-d127-481c-8901-fff3b6f9f38b" (UID: "d8603a7b-d127-481c-8901-fff3b6f9f38b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.743638 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.743687 5108 generic.go:358] "Generic (PLEG): container finished" podID="1e8c3045-7200-4b39-9531-5ce86ab0b5b5" containerID="ac28c2e3a31b1607275402b0d718319be640cc0e29653600c0bb3bfe498f42ff" exitCode=2 Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.743761 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztpws" event={"ID":"1e8c3045-7200-4b39-9531-5ce86ab0b5b5","Type":"ContainerDied","Data":"ac28c2e3a31b1607275402b0d718319be640cc0e29653600c0bb3bfe498f42ff"} Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.744338 5108 scope.go:117] "RemoveContainer" containerID="ac28c2e3a31b1607275402b0d718319be640cc0e29653600c0bb3bfe498f42ff" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.745489 5108 provider.go:93] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.753549 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "d8603a7b-d127-481c-8901-fff3b6f9f38b" (UID: "d8603a7b-d127-481c-8901-fff3b6f9f38b"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.755980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8603a7b-d127-481c-8901-fff3b6f9f38b-kube-api-access-7tz82" (OuterVolumeSpecName: "kube-api-access-7tz82") pod "d8603a7b-d127-481c-8901-fff3b6f9f38b" (UID: "d8603a7b-d127-481c-8901-fff3b6f9f38b"). InnerVolumeSpecName "kube-api-access-7tz82". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.768126 5108 scope.go:117] "RemoveContainer" containerID="6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.776958 5108 scope.go:117] "RemoveContainer" containerID="6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.789250 5108 scope.go:117] "RemoveContainer" containerID="9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98" Dec 12 14:21:47 crc kubenswrapper[5108]: E1212 14:21:47.790577 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98\": container with ID starting with 9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98 not found: ID does not exist" 
containerID="9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.790633 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98"} err="failed to get container status \"9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98\": rpc error: code = NotFound desc = could not find container \"9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98\": container with ID starting with 9fcd25dad249b60e593ec6771cdadb3e233d578960b6b0a5899598511e877b98 not found: ID does not exist" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.790691 5108 scope.go:117] "RemoveContainer" containerID="6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" Dec 12 14:21:47 crc kubenswrapper[5108]: E1212 14:21:47.790731 5108 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-rbac-proxy_ovnkube-control-plane-57b78d8988-wzxz4_openshift-ovn-kubernetes_d8603a7b-d127-481c-8901-fff3b6f9f38b_0 in pod sandbox 18c74d41b1790e55c6779339ddb0156f97c829114e50a66c9954e0eedd31b330 from index: no such id: '6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324'" containerID="6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" Dec 12 14:21:47 crc kubenswrapper[5108]: E1212 14:21:47.790800 5108 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-rbac-proxy_ovnkube-control-plane-57b78d8988-wzxz4_openshift-ovn-kubernetes_d8603a7b-d127-481c-8901-fff3b6f9f38b_0 in pod sandbox 18c74d41b1790e55c6779339ddb0156f97c829114e50a66c9954e0eedd31b330 from index: no such id: '6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324'" containerID="6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" Dec 12 14:21:47 crc 
kubenswrapper[5108]: E1212 14:21:47.791253 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324\": container with ID starting with 6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324 not found: ID does not exist" containerID="6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.791306 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324"} err="failed to get container status \"6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324\": rpc error: code = NotFound desc = could not find container \"6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324\": container with ID starting with 6d897d8d478213848ac8c364ee045dfc503fb7a8517a27b05fdaa982f98bf324 not found: ID does not exist" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.849728 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30ffcbc2-2312-46dd-9996-465d07f8e07c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.849842 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30ffcbc2-2312-46dd-9996-465d07f8e07c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 
14:21:47.849874 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30ffcbc2-2312-46dd-9996-465d07f8e07c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.849908 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8btg\" (UniqueName: \"kubernetes.io/projected/30ffcbc2-2312-46dd-9996-465d07f8e07c-kube-api-access-b8btg\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.849994 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7tz82\" (UniqueName: \"kubernetes.io/projected/d8603a7b-d127-481c-8901-fff3b6f9f38b-kube-api-access-7tz82\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.850007 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.850017 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8603a7b-d127-481c-8901-fff3b6f9f38b-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.850027 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8603a7b-d127-481c-8901-fff3b6f9f38b-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:47 crc 
kubenswrapper[5108]: I1212 14:21:47.951149 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30ffcbc2-2312-46dd-9996-465d07f8e07c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.951232 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30ffcbc2-2312-46dd-9996-465d07f8e07c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.951259 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30ffcbc2-2312-46dd-9996-465d07f8e07c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.951284 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b8btg\" (UniqueName: \"kubernetes.io/projected/30ffcbc2-2312-46dd-9996-465d07f8e07c-kube-api-access-b8btg\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.953449 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30ffcbc2-2312-46dd-9996-465d07f8e07c-ovnkube-config\") pod 
\"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.954048 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30ffcbc2-2312-46dd-9996-465d07f8e07c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.958469 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30ffcbc2-2312-46dd-9996-465d07f8e07c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:47 crc kubenswrapper[5108]: I1212 14:21:47.972580 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8btg\" (UniqueName: \"kubernetes.io/projected/30ffcbc2-2312-46dd-9996-465d07f8e07c-kube-api-access-b8btg\") pod \"ovnkube-control-plane-97c9b6c48-h6kjx\" (UID: \"30ffcbc2-2312-46dd-9996-465d07f8e07c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.008409 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wzc_934d8f16-46da-4779-8ab8-31b05d1e8b5c/ovn-acl-logging/0.log" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.009138 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wzc_934d8f16-46da-4779-8ab8-31b05d1e8b5c/ovn-controller/0.log" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.009727 5108 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052023 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-var-lib-openvswitch\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052096 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052132 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovn-node-metrics-cert\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052162 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-log-socket\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052185 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw6q8\" (UniqueName: \"kubernetes.io/projected/934d8f16-46da-4779-8ab8-31b05d1e8b5c-kube-api-access-qw6q8\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: 
I1212 14:21:48.052219 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-node-log\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052231 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-ovn\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052260 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-netns\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052284 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-env-overrides\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052307 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-systemd-units\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052332 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-openvswitch\") pod 
\"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052345 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-slash\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052360 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-script-lib\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052392 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-ovn-kubernetes\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052407 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-config\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052421 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-etc-openvswitch\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052438 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-systemd\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052501 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-bin\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052521 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-kubelet\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052539 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-netd\") pod \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\" (UID: \"934d8f16-46da-4779-8ab8-31b05d1e8b5c\") " Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052775 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052811 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.052839 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.053129 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.053189 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-log-socket" (OuterVolumeSpecName: "log-socket") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.058026 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.058196 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060159 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060197 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-node-log" (OuterVolumeSpecName: "node-log") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060302 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060344 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060372 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060397 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-slash" (OuterVolumeSpecName: "host-slash") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060420 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.060442 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.063235 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.063411 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.063547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.064472 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934d8f16-46da-4779-8ab8-31b05d1e8b5c-kube-api-access-qw6q8" (OuterVolumeSpecName: "kube-api-access-qw6q8") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "kube-api-access-qw6q8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.067545 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kcr7d"] Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068298 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovn-acl-logging" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068319 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovn-acl-logging" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068346 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovn-controller" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068353 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovn-controller" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 
14:21:48.068362 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="northd" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068369 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="northd" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068383 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-node" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068390 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-node" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068403 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kubecfg-setup" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068410 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kubecfg-setup" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068420 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068427 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068441 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="sbdb" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068447 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="sbdb" 
Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068455 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovnkube-controller" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068462 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovnkube-controller" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068471 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="nbdb" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068478 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="nbdb" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068683 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="nbdb" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.068697 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovn-controller" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.069625 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-node" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.069643 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.069654 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="sbdb" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.069663 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" 
containerName="ovn-acl-logging" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.069674 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="ovnkube-controller" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.069685 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" containerName="northd" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.080964 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.093609 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "934d8f16-46da-4779-8ab8-31b05d1e8b5c" (UID: "934d8f16-46da-4779-8ab8-31b05d1e8b5c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:48 crc kubenswrapper[5108]: W1212 14:21:48.102142 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30ffcbc2_2312_46dd_9996_465d07f8e07c.slice/crio-638a68a5b335a25f9ed321e8183cabadfc6892efff3eeb672bf43499e52b5aba WatchSource:0}: Error finding container 638a68a5b335a25f9ed321e8183cabadfc6892efff3eeb672bf43499e52b5aba: Status 404 returned error can't find the container with id 638a68a5b335a25f9ed321e8183cabadfc6892efff3eeb672bf43499e52b5aba Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.111533 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.114456 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4"] Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.131362 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wzxz4"] Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154155 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovnkube-config\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154207 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-slash\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj72r\" (UniqueName: \"kubernetes.io/projected/e747b972-dc79-4e6e-9ac3-1b64ff186e85-kube-api-access-zj72r\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154248 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-systemd-units\") pod \"ovnkube-node-kcr7d\" (UID: 
\"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-ovn\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154280 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-cni-bin\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154311 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-systemd\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154347 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovn-node-metrics-cert\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-etc-openvswitch\") pod \"ovnkube-node-kcr7d\" 
(UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154382 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154395 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-env-overrides\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154409 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-log-socket\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154434 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-var-lib-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154448 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-run-ovn-kubernetes\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154464 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-kubelet\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154478 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-node-log\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154493 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-cni-netd\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154507 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-run-netns\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154528 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovnkube-script-lib\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154558 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154605 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154615 5108 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154623 5108 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154631 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154643 5108 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-kubelet\") on node \"crc\" 
DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154755 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154764 5108 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154774 5108 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154782 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154790 5108 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-log-socket\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154799 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qw6q8\" (UniqueName: \"kubernetes.io/projected/934d8f16-46da-4779-8ab8-31b05d1e8b5c-kube-api-access-qw6q8\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154806 5108 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-node-log\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: 
I1212 14:21:48.154813 5108 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154821 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154828 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154865 5108 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154872 5108 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154880 5108 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-slash\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154888 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/934d8f16-46da-4779-8ab8-31b05d1e8b5c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.154896 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/934d8f16-46da-4779-8ab8-31b05d1e8b5c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.255859 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.255931 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovnkube-config\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.255964 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-slash\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256027 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-slash\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256094 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256194 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zj72r\" (UniqueName: \"kubernetes.io/projected/e747b972-dc79-4e6e-9ac3-1b64ff186e85-kube-api-access-zj72r\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256450 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-systemd-units\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-systemd-units\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-ovn\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256974 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-cni-bin\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256667 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-ovn\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257043 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-systemd\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257127 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-cni-bin\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257164 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovn-node-metrics-cert\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257196 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-etc-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 
14:21:48.257233 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257239 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-systemd\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257263 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-env-overrides\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257280 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-etc-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257286 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-log-socket\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257308 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-run-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257342 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-var-lib-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257368 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-run-ovn-kubernetes\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257403 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-kubelet\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257427 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-node-log\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257455 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-cni-netd\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-run-netns\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257533 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovnkube-script-lib\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257705 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-env-overrides\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-kubelet\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257772 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-log-socket\") pod \"ovnkube-node-kcr7d\" 
(UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257793 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-var-lib-openvswitch\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257821 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-run-ovn-kubernetes\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257844 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-cni-netd\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257867 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-node-log\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.257887 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e747b972-dc79-4e6e-9ac3-1b64ff186e85-host-run-netns\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 
12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.256863 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovnkube-config\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.258448 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovnkube-script-lib\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.261135 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e747b972-dc79-4e6e-9ac3-1b64ff186e85-ovn-node-metrics-cert\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.272877 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj72r\" (UniqueName: \"kubernetes.io/projected/e747b972-dc79-4e6e-9ac3-1b64ff186e85-kube-api-access-zj72r\") pod \"ovnkube-node-kcr7d\" (UID: \"e747b972-dc79-4e6e-9ac3-1b64ff186e85\") " pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.436432 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" Dec 12 14:21:48 crc kubenswrapper[5108]: W1212 14:21:48.457736 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode747b972_dc79_4e6e_9ac3_1b64ff186e85.slice/crio-75f82a62fe8655e6dc2ee8a5c8ccd1a718e3e177670d3b8335587e3dbb60f4b2 WatchSource:0}: Error finding container 75f82a62fe8655e6dc2ee8a5c8ccd1a718e3e177670d3b8335587e3dbb60f4b2: Status 404 returned error can't find the container with id 75f82a62fe8655e6dc2ee8a5c8ccd1a718e3e177670d3b8335587e3dbb60f4b2 Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.756528 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.756810 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztpws" event={"ID":"1e8c3045-7200-4b39-9531-5ce86ab0b5b5","Type":"ContainerStarted","Data":"9ecfc7341f95c03ae3170175cc2e65ace31567c896b2023ad97587cd77bc8381"} Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.767373 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" event={"ID":"30ffcbc2-2312-46dd-9996-465d07f8e07c","Type":"ContainerStarted","Data":"580b784707b55eafd2d509bf7a25536c92ee5045077aae0ea4500cc0e7cece32"} Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.767487 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" event={"ID":"30ffcbc2-2312-46dd-9996-465d07f8e07c","Type":"ContainerStarted","Data":"ed27f423d2123d18dca5b21218fcc78dd705a3f16e6e532f05e02a99107245b0"} Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.767502 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" 
event={"ID":"30ffcbc2-2312-46dd-9996-465d07f8e07c","Type":"ContainerStarted","Data":"638a68a5b335a25f9ed321e8183cabadfc6892efff3eeb672bf43499e52b5aba"} Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.776969 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wzc_934d8f16-46da-4779-8ab8-31b05d1e8b5c/ovn-acl-logging/0.log" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.777825 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wzc_934d8f16-46da-4779-8ab8-31b05d1e8b5c/ovn-controller/0.log" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.778879 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.778948 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wzc" event={"ID":"934d8f16-46da-4779-8ab8-31b05d1e8b5c","Type":"ContainerDied","Data":"b992142de4239b8b21dcd0596986f91d86fd70204d0fbf8258995c96c7f0ca90"} Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.779323 5108 scope.go:117] "RemoveContainer" containerID="3904bdd05696ca809605e7ff25066a563efbdf5d7e944cc4cc56b32b255f428e" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.786945 5108 generic.go:358] "Generic (PLEG): container finished" podID="e747b972-dc79-4e6e-9ac3-1b64ff186e85" containerID="0f6bcbe3af4b369d17583f55bbe1d27b229c58f2021de076e4b55c22e52a5b7d" exitCode=0 Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.787060 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerDied","Data":"0f6bcbe3af4b369d17583f55bbe1d27b229c58f2021de076e4b55c22e52a5b7d"} Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.787121 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"75f82a62fe8655e6dc2ee8a5c8ccd1a718e3e177670d3b8335587e3dbb60f4b2"} Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.797699 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h6kjx" podStartSLOduration=3.797682457 podStartE2EDuration="3.797682457s" podCreationTimestamp="2025-12-12 14:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:21:48.795676604 +0000 UTC m=+661.703667783" watchObservedRunningTime="2025-12-12 14:21:48.797682457 +0000 UTC m=+661.705673616" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.820808 5108 scope.go:117] "RemoveContainer" containerID="e77382ed2a634eba38b927f4046daeb8627465aaa3f0f1328f36300bd391925d" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.854073 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69wzc"] Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.855961 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69wzc"] Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.856186 5108 scope.go:117] "RemoveContainer" containerID="5615ed6026dc7cc3d5c646cc273ee282bf8790a71ae4a50ea8a8067550bf067f" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.885984 5108 scope.go:117] "RemoveContainer" containerID="f941cb5e5a8e0f562bf1274b00288a3e58fe27459711c3e231201377c4cb7a10" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.902879 5108 scope.go:117] "RemoveContainer" containerID="bcfb8a5acb80dea15b10468780de99a6fb687ef49e693d7fb552ed187b78607b" Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.919052 5108 scope.go:117] "RemoveContainer" 
containerID="a1f681c1c61bf023f01cbca01e489ba9853462e7471cc85cc24e1b5da86096ea"
Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.937346 5108 scope.go:117] "RemoveContainer" containerID="9919b79275f59aff26b0acffc3954a149d74c9173a5c44d77512934a99cadd03"
Dec 12 14:21:48 crc kubenswrapper[5108]: I1212 14:21:48.991326 5108 scope.go:117] "RemoveContainer" containerID="9da4bf297887a716ed638824bbce5aca0592ab7354dff37269b576a4154f6b66"
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.014200 5108 scope.go:117] "RemoveContainer" containerID="68c7e7ce42d7d01313b8ae6c15bcab4983632d2398dd1b85bcfa8767a8ee7b30"
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.421333 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934d8f16-46da-4779-8ab8-31b05d1e8b5c" path="/var/lib/kubelet/pods/934d8f16-46da-4779-8ab8-31b05d1e8b5c/volumes"
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.422447 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8603a7b-d127-481c-8901-fff3b6f9f38b" path="/var/lib/kubelet/pods/d8603a7b-d127-481c-8901-fff3b6f9f38b/volumes"
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.799189 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"255bbd384b1c0e013ce5a2c3eecb9876f19cdb0d7e3a21e65fbcd6a0955e54cb"}
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.799244 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"231f415321454ced45bf596a42beae92854148b35fb7e0543349d19209dd65dd"}
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.799260 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"da24b51f27f81ceccae11b20e8f2c0cb0469802f34a54d26ff41225319851354"}
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.799272 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"8bb6f764f913a2f7c4b1e811d1534ecca508f51e2e63f7d171d8d0d5833b7f94"}
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.799285 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"a46a97237938ce565c79ae5d71bd2f27dc447bc279bd4890f156b3c1aab3888e"}
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.986578 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:21:49 crc kubenswrapper[5108]: I1212 14:21:49.986646 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:21:50 crc kubenswrapper[5108]: I1212 14:21:50.811106 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"24d8d339bbfdf73de9609b9fe3ef60b2f418bb2f47edfa20015adb4abd466ad5"}
Dec 12 14:21:52 crc kubenswrapper[5108]: I1212 14:21:52.831379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"5fd2f48387a6d7f97756351e8f05e229233217a2674ac342997a7282427810fa"}
Dec 12 14:21:55 crc kubenswrapper[5108]: I1212 14:21:55.851802 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" event={"ID":"e747b972-dc79-4e6e-9ac3-1b64ff186e85","Type":"ContainerStarted","Data":"32e186e0673088afb61bf665091830b0c4d1d39bc9d6a886bfa1483d95d82098"}
Dec 12 14:21:55 crc kubenswrapper[5108]: I1212 14:21:55.852413 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d"
Dec 12 14:21:55 crc kubenswrapper[5108]: I1212 14:21:55.852429 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d"
Dec 12 14:21:55 crc kubenswrapper[5108]: I1212 14:21:55.878303 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d" podStartSLOduration=7.878276795 podStartE2EDuration="7.878276795s" podCreationTimestamp="2025-12-12 14:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:21:55.878252904 +0000 UTC m=+668.786244083" watchObservedRunningTime="2025-12-12 14:21:55.878276795 +0000 UTC m=+668.786267954"
Dec 12 14:21:55 crc kubenswrapper[5108]: I1212 14:21:55.898553 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d"
Dec 12 14:21:56 crc kubenswrapper[5108]: I1212 14:21:56.860215 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d"
Dec 12 14:21:56 crc kubenswrapper[5108]: I1212 14:21:56.909123 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d"
Dec 12 14:22:19 crc kubenswrapper[5108]: I1212 14:22:19.986528 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:22:19 crc kubenswrapper[5108]: I1212 14:22:19.986801 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:22:28 crc kubenswrapper[5108]: I1212 14:22:28.890159 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kcr7d"
Dec 12 14:22:49 crc kubenswrapper[5108]: I1212 14:22:49.986937 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:22:49 crc kubenswrapper[5108]: I1212 14:22:49.987502 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:22:49 crc kubenswrapper[5108]: I1212 14:22:49.987551 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w294k"
Dec 12 14:22:49 crc kubenswrapper[5108]: I1212 14:22:49.988117 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f71b57f29e2ff270e6b601b2c583e9205b2e1619208bf2a39737e6a39c51a2f1"} pod="openshift-machine-config-operator/machine-config-daemon-w294k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 14:22:49 crc kubenswrapper[5108]: I1212 14:22:49.988171 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" containerID="cri-o://f71b57f29e2ff270e6b601b2c583e9205b2e1619208bf2a39737e6a39c51a2f1" gracePeriod=600
Dec 12 14:22:51 crc kubenswrapper[5108]: I1212 14:22:51.172410 5108 generic.go:358] "Generic (PLEG): container finished" podID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerID="f71b57f29e2ff270e6b601b2c583e9205b2e1619208bf2a39737e6a39c51a2f1" exitCode=0
Dec 12 14:22:51 crc kubenswrapper[5108]: I1212 14:22:51.172486 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerDied","Data":"f71b57f29e2ff270e6b601b2c583e9205b2e1619208bf2a39737e6a39c51a2f1"}
Dec 12 14:22:51 crc kubenswrapper[5108]: I1212 14:22:51.172564 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"4ab7ac5ee0d1edb7108d7f5ec4e957c0f7674bd3372b098711a9332769c2a4ec"}
Dec 12 14:22:51 crc kubenswrapper[5108]: I1212 14:22:51.172584 5108 scope.go:117] "RemoveContainer" containerID="7adea82340555f4f84d2706f521506fd50722f67e31c512652c717dcaa73a33c"
Dec 12 14:23:37 crc kubenswrapper[5108]: I1212 14:23:37.699981 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkdxn"]
Dec 12 14:23:37 crc kubenswrapper[5108]: I1212 14:23:37.709546 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkdxn" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="registry-server" containerID="cri-o://9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42" gracePeriod=30
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.051991 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.088059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljvc6\" (UniqueName: \"kubernetes.io/projected/ba7bd770-9185-40a3-ae63-961ee83bd38e-kube-api-access-ljvc6\") pod \"ba7bd770-9185-40a3-ae63-961ee83bd38e\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") "
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.088158 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-utilities\") pod \"ba7bd770-9185-40a3-ae63-961ee83bd38e\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") "
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.088351 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-catalog-content\") pod \"ba7bd770-9185-40a3-ae63-961ee83bd38e\" (UID: \"ba7bd770-9185-40a3-ae63-961ee83bd38e\") "
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.089768 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-utilities" (OuterVolumeSpecName: "utilities") pod "ba7bd770-9185-40a3-ae63-961ee83bd38e" (UID: "ba7bd770-9185-40a3-ae63-961ee83bd38e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.096297 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7bd770-9185-40a3-ae63-961ee83bd38e-kube-api-access-ljvc6" (OuterVolumeSpecName: "kube-api-access-ljvc6") pod "ba7bd770-9185-40a3-ae63-961ee83bd38e" (UID: "ba7bd770-9185-40a3-ae63-961ee83bd38e"). InnerVolumeSpecName "kube-api-access-ljvc6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.100704 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba7bd770-9185-40a3-ae63-961ee83bd38e" (UID: "ba7bd770-9185-40a3-ae63-961ee83bd38e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.189521 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.189564 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljvc6\" (UniqueName: \"kubernetes.io/projected/ba7bd770-9185-40a3-ae63-961ee83bd38e-kube-api-access-ljvc6\") on node \"crc\" DevicePath \"\""
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.189579 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7bd770-9185-40a3-ae63-961ee83bd38e-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.436233 5108 generic.go:358] "Generic (PLEG): container finished" podID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerID="9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42" exitCode=0
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.436323 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkdxn" event={"ID":"ba7bd770-9185-40a3-ae63-961ee83bd38e","Type":"ContainerDied","Data":"9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42"}
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.436395 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkdxn"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.436420 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkdxn" event={"ID":"ba7bd770-9185-40a3-ae63-961ee83bd38e","Type":"ContainerDied","Data":"2cdfa6f370422d34de92305a518718bc8ea1ecf69501e2b93a501bf93e7b0fc5"}
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.436454 5108 scope.go:117] "RemoveContainer" containerID="9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.457787 5108 scope.go:117] "RemoveContainer" containerID="4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.479096 5108 scope.go:117] "RemoveContainer" containerID="fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.494850 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkdxn"]
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.501347 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkdxn"]
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.509523 5108 scope.go:117] "RemoveContainer" containerID="9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42"
Dec 12 14:23:38 crc kubenswrapper[5108]: E1212 14:23:38.510238 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42\": container with ID starting with 9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42 not found: ID does not exist" containerID="9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.510271 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42"} err="failed to get container status \"9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42\": rpc error: code = NotFound desc = could not find container \"9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42\": container with ID starting with 9cdd46b794513151581bbd5c8662d1ea1f953faa845143fabe20e1b42821df42 not found: ID does not exist"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.510297 5108 scope.go:117] "RemoveContainer" containerID="4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6"
Dec 12 14:23:38 crc kubenswrapper[5108]: E1212 14:23:38.510772 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6\": container with ID starting with 4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6 not found: ID does not exist" containerID="4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.510869 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6"} err="failed to get container status \"4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6\": rpc error: code = NotFound desc = could not find container \"4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6\": container with ID starting with 4bb3fc51896ffda51a2adb0138055a344158f932a7984c30dadec347e43bf5a6 not found: ID does not exist"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.510956 5108 scope.go:117] "RemoveContainer" containerID="fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b"
Dec 12 14:23:38 crc kubenswrapper[5108]: E1212 14:23:38.511994 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b\": container with ID starting with fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b not found: ID does not exist" containerID="fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b"
Dec 12 14:23:38 crc kubenswrapper[5108]: I1212 14:23:38.512065 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b"} err="failed to get container status \"fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b\": rpc error: code = NotFound desc = could not find container \"fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b\": container with ID starting with fbc9b40f785f101174e0264beb3fd50817068750fa28f471d00b36b12177496b not found: ID does not exist"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.416996 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" path="/var/lib/kubelet/pods/ba7bd770-9185-40a3-ae63-961ee83bd38e/volumes"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.455502 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cn47m"]
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.457113 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="extract-utilities"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.457236 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="extract-utilities"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.457394 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="extract-content"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.457477 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="extract-content"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.457610 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="registry-server"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.457694 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="registry-server"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.458061 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ba7bd770-9185-40a3-ae63-961ee83bd38e" containerName="registry-server"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.489149 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cn47m"]
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.489455 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.611717 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-trusted-ca\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.612293 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.612440 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.612548 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntqhv\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-kube-api-access-ntqhv\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.612675 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-registry-tls\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.612781 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.612872 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-registry-certificates\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.612964 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.642647 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.713927 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-registry-certificates\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.714015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.714055 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-trusted-ca\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.714106 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.714126 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntqhv\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-kube-api-access-ntqhv\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.714184 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-registry-tls\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.714215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.715008 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.715283 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-trusted-ca\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.715364 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-registry-certificates\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.724202 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.724351 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-registry-tls\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.739636 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.740055 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntqhv\" (UniqueName: \"kubernetes.io/projected/ec5b8b83-3a2f-410f-8a04-ceeec94983b7-kube-api-access-ntqhv\") pod \"image-registry-5d9d95bf5b-cn47m\" (UID: \"ec5b8b83-3a2f-410f-8a04-ceeec94983b7\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:39 crc kubenswrapper[5108]: I1212 14:23:39.809266 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:40 crc kubenswrapper[5108]: I1212 14:23:40.009528 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cn47m"]
Dec 12 14:23:40 crc kubenswrapper[5108]: I1212 14:23:40.474443 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m" event={"ID":"ec5b8b83-3a2f-410f-8a04-ceeec94983b7","Type":"ContainerStarted","Data":"087527ba046620cc10ea59352637c941b8106e55bd1b1f2535a3b05a1b84d2c7"}
Dec 12 14:23:40 crc kubenswrapper[5108]: I1212 14:23:40.474861 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m"
Dec 12 14:23:40 crc kubenswrapper[5108]: I1212 14:23:40.474878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m" event={"ID":"ec5b8b83-3a2f-410f-8a04-ceeec94983b7","Type":"ContainerStarted","Data":"ae2cf688ce0cf093a3c5c2d239f31b9d0e1167b3731e3591327b992beda4755e"}
Dec 12 14:23:40 crc kubenswrapper[5108]: I1212 14:23:40.495466 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m" podStartSLOduration=1.495448397 podStartE2EDuration="1.495448397s" podCreationTimestamp="2025-12-12 14:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:23:40.494634056 +0000 UTC m=+773.402625225" watchObservedRunningTime="2025-12-12 14:23:40.495448397 +0000 UTC m=+773.403439556"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.011967 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"]
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.021323 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.023756 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.025354 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"]
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.042513 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gqd\" (UniqueName: \"kubernetes.io/projected/b15535f9-8945-4b81-80e4-ae5e00046212-kube-api-access-96gqd\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.042576 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.042714 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.143484 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96gqd\" (UniqueName: \"kubernetes.io/projected/b15535f9-8945-4b81-80e4-ae5e00046212-kube-api-access-96gqd\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.143565 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.143652 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.144228 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.144341 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.165668 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gqd\" (UniqueName: \"kubernetes.io/projected/b15535f9-8945-4b81-80e4-ae5e00046212-kube-api-access-96gqd\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.341548 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"
Dec 12 14:23:42 crc kubenswrapper[5108]: I1212 14:23:42.583865 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5"]
Dec 12 14:23:42 crc kubenswrapper[5108]: W1212 14:23:42.594673 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb15535f9_8945_4b81_80e4_ae5e00046212.slice/crio-c3ff669cb4ff11e826c9074d4cf0db9a98756cf9facae12823bb1d3c95370713 WatchSource:0}: Error finding container c3ff669cb4ff11e826c9074d4cf0db9a98756cf9facae12823bb1d3c95370713: Status 404 returned error can't find the container with id c3ff669cb4ff11e826c9074d4cf0db9a98756cf9facae12823bb1d3c95370713
Dec 12 14:23:43 crc kubenswrapper[5108]: I1212 14:23:43.498858 5108 generic.go:358] "Generic (PLEG): container finished" podID="b15535f9-8945-4b81-80e4-ae5e00046212" containerID="3c44e1688d12f0f12aaf032f053f87f216cdde5ccf8a43cf38186c8c2a9be776" exitCode=0
Dec 12 14:23:43 crc kubenswrapper[5108]: I1212 14:23:43.498905 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5" event={"ID":"b15535f9-8945-4b81-80e4-ae5e00046212","Type":"ContainerDied","Data":"3c44e1688d12f0f12aaf032f053f87f216cdde5ccf8a43cf38186c8c2a9be776"}
Dec 12 14:23:43 crc kubenswrapper[5108]: I1212 14:23:43.498956 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5" event={"ID":"b15535f9-8945-4b81-80e4-ae5e00046212","Type":"ContainerStarted","Data":"c3ff669cb4ff11e826c9074d4cf0db9a98756cf9facae12823bb1d3c95370713"}
Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.759881 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f8rcq"]
Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.768777 5108 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.777188 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f8rcq"] Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.880584 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-catalog-content\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.880654 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-utilities\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.880678 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr94x\" (UniqueName: \"kubernetes.io/projected/7789c133-837f-462e-baa0-f0156eb61ede-kube-api-access-gr94x\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.982420 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-catalog-content\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.982494 5108 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-utilities\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.982516 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gr94x\" (UniqueName: \"kubernetes.io/projected/7789c133-837f-462e-baa0-f0156eb61ede-kube-api-access-gr94x\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.983233 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-catalog-content\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:44 crc kubenswrapper[5108]: I1212 14:23:44.983261 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-utilities\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.010472 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr94x\" (UniqueName: \"kubernetes.io/projected/7789c133-837f-462e-baa0-f0156eb61ede-kube-api-access-gr94x\") pod \"redhat-operators-f8rcq\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.134787 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.339014 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f8rcq"] Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.510145 5108 generic.go:358] "Generic (PLEG): container finished" podID="b15535f9-8945-4b81-80e4-ae5e00046212" containerID="ff610c5a1b4c717b8c796820dcd0d55ce30387e91a6fc507a8ce37cd59af5ddb" exitCode=0 Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.510244 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5" event={"ID":"b15535f9-8945-4b81-80e4-ae5e00046212","Type":"ContainerDied","Data":"ff610c5a1b4c717b8c796820dcd0d55ce30387e91a6fc507a8ce37cd59af5ddb"} Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.513393 5108 generic.go:358] "Generic (PLEG): container finished" podID="7789c133-837f-462e-baa0-f0156eb61ede" containerID="a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd" exitCode=0 Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.513489 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8rcq" event={"ID":"7789c133-837f-462e-baa0-f0156eb61ede","Type":"ContainerDied","Data":"a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd"} Dec 12 14:23:45 crc kubenswrapper[5108]: I1212 14:23:45.513521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8rcq" event={"ID":"7789c133-837f-462e-baa0-f0156eb61ede","Type":"ContainerStarted","Data":"b88446c12da22bb325deb4d9034bcf32be280549edeaaddc6dc0e2d6622d3e8c"} Dec 12 14:23:46 crc kubenswrapper[5108]: I1212 14:23:46.523337 5108 generic.go:358] "Generic (PLEG): container finished" podID="b15535f9-8945-4b81-80e4-ae5e00046212" 
containerID="674c7cc9c88900f33b8c5c30fb80991570d471fa2159e939ee2791d384c7f35b" exitCode=0 Dec 12 14:23:46 crc kubenswrapper[5108]: I1212 14:23:46.523495 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5" event={"ID":"b15535f9-8945-4b81-80e4-ae5e00046212","Type":"ContainerDied","Data":"674c7cc9c88900f33b8c5c30fb80991570d471fa2159e939ee2791d384c7f35b"} Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.532761 5108 generic.go:358] "Generic (PLEG): container finished" podID="7789c133-837f-462e-baa0-f0156eb61ede" containerID="6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d" exitCode=0 Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.532862 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8rcq" event={"ID":"7789c133-837f-462e-baa0-f0156eb61ede","Type":"ContainerDied","Data":"6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d"} Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.774973 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5" Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.952455 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96gqd\" (UniqueName: \"kubernetes.io/projected/b15535f9-8945-4b81-80e4-ae5e00046212-kube-api-access-96gqd\") pod \"b15535f9-8945-4b81-80e4-ae5e00046212\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.952570 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-bundle\") pod \"b15535f9-8945-4b81-80e4-ae5e00046212\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.952674 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-util\") pod \"b15535f9-8945-4b81-80e4-ae5e00046212\" (UID: \"b15535f9-8945-4b81-80e4-ae5e00046212\") " Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.955020 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-bundle" (OuterVolumeSpecName: "bundle") pod "b15535f9-8945-4b81-80e4-ae5e00046212" (UID: "b15535f9-8945-4b81-80e4-ae5e00046212"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:23:47 crc kubenswrapper[5108]: I1212 14:23:47.959538 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15535f9-8945-4b81-80e4-ae5e00046212-kube-api-access-96gqd" (OuterVolumeSpecName: "kube-api-access-96gqd") pod "b15535f9-8945-4b81-80e4-ae5e00046212" (UID: "b15535f9-8945-4b81-80e4-ae5e00046212"). InnerVolumeSpecName "kube-api-access-96gqd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:23:48 crc kubenswrapper[5108]: I1212 14:23:48.053775 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:48 crc kubenswrapper[5108]: I1212 14:23:48.053814 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96gqd\" (UniqueName: \"kubernetes.io/projected/b15535f9-8945-4b81-80e4-ae5e00046212-kube-api-access-96gqd\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:48 crc kubenswrapper[5108]: I1212 14:23:48.544776 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5" Dec 12 14:23:48 crc kubenswrapper[5108]: I1212 14:23:48.544768 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k2fg5" event={"ID":"b15535f9-8945-4b81-80e4-ae5e00046212","Type":"ContainerDied","Data":"c3ff669cb4ff11e826c9074d4cf0db9a98756cf9facae12823bb1d3c95370713"} Dec 12 14:23:48 crc kubenswrapper[5108]: I1212 14:23:48.544940 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ff669cb4ff11e826c9074d4cf0db9a98756cf9facae12823bb1d3c95370713" Dec 12 14:23:48 crc kubenswrapper[5108]: I1212 14:23:48.620130 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-util" (OuterVolumeSpecName: "util") pod "b15535f9-8945-4b81-80e4-ae5e00046212" (UID: "b15535f9-8945-4b81-80e4-ae5e00046212"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:23:48 crc kubenswrapper[5108]: I1212 14:23:48.662478 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b15535f9-8945-4b81-80e4-ae5e00046212-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.553516 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8rcq" event={"ID":"7789c133-837f-462e-baa0-f0156eb61ede","Type":"ContainerStarted","Data":"9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313"} Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.569533 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f8rcq" podStartSLOduration=4.664119766 podStartE2EDuration="5.569516295s" podCreationTimestamp="2025-12-12 14:23:44 +0000 UTC" firstStartedPulling="2025-12-12 14:23:45.514498968 +0000 UTC m=+778.422490127" lastFinishedPulling="2025-12-12 14:23:46.419895457 +0000 UTC m=+779.327886656" observedRunningTime="2025-12-12 14:23:49.568803547 +0000 UTC m=+782.476794716" watchObservedRunningTime="2025-12-12 14:23:49.569516295 +0000 UTC m=+782.477507454" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603234 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q"] Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603767 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15535f9-8945-4b81-80e4-ae5e00046212" containerName="pull" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603780 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15535f9-8945-4b81-80e4-ae5e00046212" containerName="pull" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603806 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="b15535f9-8945-4b81-80e4-ae5e00046212" containerName="extract" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603813 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15535f9-8945-4b81-80e4-ae5e00046212" containerName="extract" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603821 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15535f9-8945-4b81-80e4-ae5e00046212" containerName="util" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603826 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15535f9-8945-4b81-80e4-ae5e00046212" containerName="util" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.603930 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15535f9-8945-4b81-80e4-ae5e00046212" containerName="extract" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.612432 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.612896 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q"] Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.615098 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.775753 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rf8l\" (UniqueName: \"kubernetes.io/projected/2264e822-154c-48fa-a2ba-0264faf1df18-kube-api-access-8rf8l\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc 
kubenswrapper[5108]: I1212 14:23:49.775815 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.775842 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.876761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.876870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rf8l\" (UniqueName: \"kubernetes.io/projected/2264e822-154c-48fa-a2ba-0264faf1df18-kube-api-access-8rf8l\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.876903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.877367 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.877583 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.907329 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rf8l\" (UniqueName: \"kubernetes.io/projected/2264e822-154c-48fa-a2ba-0264faf1df18-kube-api-access-8rf8l\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:49 crc kubenswrapper[5108]: I1212 14:23:49.927594 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:50 crc kubenswrapper[5108]: I1212 14:23:50.137493 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q"] Dec 12 14:23:50 crc kubenswrapper[5108]: I1212 14:23:50.562603 5108 generic.go:358] "Generic (PLEG): container finished" podID="2264e822-154c-48fa-a2ba-0264faf1df18" containerID="ce17688bbe5bd628100308a36f71c4563f5582cb6654edf087cf64f6861397c9" exitCode=0 Dec 12 14:23:50 crc kubenswrapper[5108]: I1212 14:23:50.562764 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" event={"ID":"2264e822-154c-48fa-a2ba-0264faf1df18","Type":"ContainerDied","Data":"ce17688bbe5bd628100308a36f71c4563f5582cb6654edf087cf64f6861397c9"} Dec 12 14:23:50 crc kubenswrapper[5108]: I1212 14:23:50.562840 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" event={"ID":"2264e822-154c-48fa-a2ba-0264faf1df18","Type":"ContainerStarted","Data":"882e8e24f954b86fd7f50f45146b0aeb748ac91967e2a16f32ef29516f607366"} Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.168389 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-82wgn"] Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.173606 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.184030 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-82wgn"] Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.242957 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-catalog-content\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.243050 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-utilities\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.243098 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrsw\" (UniqueName: \"kubernetes.io/projected/56072627-b825-4a82-a2bb-d7af04666390-kube-api-access-kcrsw\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.344493 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-catalog-content\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.345189 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-catalog-content\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.345276 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-utilities\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.345328 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrsw\" (UniqueName: \"kubernetes.io/projected/56072627-b825-4a82-a2bb-d7af04666390-kube-api-access-kcrsw\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.346182 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-utilities\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.384570 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrsw\" (UniqueName: \"kubernetes.io/projected/56072627-b825-4a82-a2bb-d7af04666390-kube-api-access-kcrsw\") pod \"certified-operators-82wgn\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.526537 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.607586 5108 generic.go:358] "Generic (PLEG): container finished" podID="2264e822-154c-48fa-a2ba-0264faf1df18" containerID="2fd5a0f40eca5dbd82f768d166c548d632b5a5a2330f874c0827a21f66279da9" exitCode=0 Dec 12 14:23:54 crc kubenswrapper[5108]: I1212 14:23:54.607661 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" event={"ID":"2264e822-154c-48fa-a2ba-0264faf1df18","Type":"ContainerDied","Data":"2fd5a0f40eca5dbd82f768d166c548d632b5a5a2330f874c0827a21f66279da9"} Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.093014 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-82wgn"] Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.134919 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.135283 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.189250 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.615493 5108 generic.go:358] "Generic (PLEG): container finished" podID="2264e822-154c-48fa-a2ba-0264faf1df18" containerID="594f82963e3dc0568740b4976fe475fd006c8e8c4f0ff0c61506f2f43ec7cfb0" exitCode=0 Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.615544 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" 
event={"ID":"2264e822-154c-48fa-a2ba-0264faf1df18","Type":"ContainerDied","Data":"594f82963e3dc0568740b4976fe475fd006c8e8c4f0ff0c61506f2f43ec7cfb0"} Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.617226 5108 generic.go:358] "Generic (PLEG): container finished" podID="56072627-b825-4a82-a2bb-d7af04666390" containerID="b82f5dfc4fb48cec54990a3ff5b247f1946f48e4c65542b875d1eda7a035315e" exitCode=0 Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.617435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82wgn" event={"ID":"56072627-b825-4a82-a2bb-d7af04666390","Type":"ContainerDied","Data":"b82f5dfc4fb48cec54990a3ff5b247f1946f48e4c65542b875d1eda7a035315e"} Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.617476 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82wgn" event={"ID":"56072627-b825-4a82-a2bb-d7af04666390","Type":"ContainerStarted","Data":"7d8c109690c84091e1c01e0ac23cedad9b0a56baa5ebea7f715ef1629838663e"} Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.694595 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:23:55 crc kubenswrapper[5108]: I1212 14:23:55.807216 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz"] Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.052743 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz"] Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.052937 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.069062 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwpz\" (UniqueName: \"kubernetes.io/projected/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-kube-api-access-2nwpz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.069158 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.069196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.169993 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwpz\" (UniqueName: \"kubernetes.io/projected/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-kube-api-access-2nwpz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.170050 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.170072 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.170641 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.170827 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.190664 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2nwpz\" (UniqueName: \"kubernetes.io/projected/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-kube-api-access-2nwpz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.368887 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.626153 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82wgn" event={"ID":"56072627-b825-4a82-a2bb-d7af04666390","Type":"ContainerStarted","Data":"bca26d145ec3370e1afb25bfeeba78e579355f38e96d95dfff555176a915bd17"} Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.826532 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.897792 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz"] Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.910503 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-util\") pod \"2264e822-154c-48fa-a2ba-0264faf1df18\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.910680 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rf8l\" (UniqueName: \"kubernetes.io/projected/2264e822-154c-48fa-a2ba-0264faf1df18-kube-api-access-8rf8l\") pod \"2264e822-154c-48fa-a2ba-0264faf1df18\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.910863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-bundle\") pod \"2264e822-154c-48fa-a2ba-0264faf1df18\" (UID: \"2264e822-154c-48fa-a2ba-0264faf1df18\") " Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.911806 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-bundle" (OuterVolumeSpecName: "bundle") pod "2264e822-154c-48fa-a2ba-0264faf1df18" (UID: "2264e822-154c-48fa-a2ba-0264faf1df18"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.917232 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2264e822-154c-48fa-a2ba-0264faf1df18-kube-api-access-8rf8l" (OuterVolumeSpecName: "kube-api-access-8rf8l") pod "2264e822-154c-48fa-a2ba-0264faf1df18" (UID: "2264e822-154c-48fa-a2ba-0264faf1df18"). InnerVolumeSpecName "kube-api-access-8rf8l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:23:56 crc kubenswrapper[5108]: I1212 14:23:56.920449 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-util" (OuterVolumeSpecName: "util") pod "2264e822-154c-48fa-a2ba-0264faf1df18" (UID: "2264e822-154c-48fa-a2ba-0264faf1df18"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.012369 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.012406 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8rf8l\" (UniqueName: \"kubernetes.io/projected/2264e822-154c-48fa-a2ba-0264faf1df18-kube-api-access-8rf8l\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.012420 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2264e822-154c-48fa-a2ba-0264faf1df18-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.633143 5108 generic.go:358] "Generic (PLEG): container finished" podID="56072627-b825-4a82-a2bb-d7af04666390" containerID="bca26d145ec3370e1afb25bfeeba78e579355f38e96d95dfff555176a915bd17" exitCode=0 Dec 12 14:23:57 crc 
kubenswrapper[5108]: I1212 14:23:57.633193 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82wgn" event={"ID":"56072627-b825-4a82-a2bb-d7af04666390","Type":"ContainerDied","Data":"bca26d145ec3370e1afb25bfeeba78e579355f38e96d95dfff555176a915bd17"} Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.635091 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" event={"ID":"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5","Type":"ContainerStarted","Data":"81615fb9dd7c3119a61711fc7c2d3cd159c266faf4ae0399d7b4db5b65d1e793"} Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.635133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" event={"ID":"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5","Type":"ContainerStarted","Data":"c7b8bd10dbabcf072dab22ea22bbc60af4fba0332899f2c50481ebefa8b3dc95"} Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.637313 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.637370 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqwf4q" event={"ID":"2264e822-154c-48fa-a2ba-0264faf1df18","Type":"ContainerDied","Data":"882e8e24f954b86fd7f50f45146b0aeb748ac91967e2a16f32ef29516f607366"} Dec 12 14:23:57 crc kubenswrapper[5108]: I1212 14:23:57.637407 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="882e8e24f954b86fd7f50f45146b0aeb748ac91967e2a16f32ef29516f607366" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.645887 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82wgn" event={"ID":"56072627-b825-4a82-a2bb-d7af04666390","Type":"ContainerStarted","Data":"d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b"} Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.668108 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-82wgn" podStartSLOduration=3.962186717 podStartE2EDuration="4.668090458s" podCreationTimestamp="2025-12-12 14:23:54 +0000 UTC" firstStartedPulling="2025-12-12 14:23:55.61810627 +0000 UTC m=+788.526097429" lastFinishedPulling="2025-12-12 14:23:56.324010011 +0000 UTC m=+789.232001170" observedRunningTime="2025-12-12 14:23:58.665118349 +0000 UTC m=+791.573109528" watchObservedRunningTime="2025-12-12 14:23:58.668090458 +0000 UTC m=+791.576081627" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.930579 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-k7fwg"] Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.931280 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="2264e822-154c-48fa-a2ba-0264faf1df18" containerName="util" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.931302 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2264e822-154c-48fa-a2ba-0264faf1df18" containerName="util" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.931322 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2264e822-154c-48fa-a2ba-0264faf1df18" containerName="extract" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.931330 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2264e822-154c-48fa-a2ba-0264faf1df18" containerName="extract" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.931365 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2264e822-154c-48fa-a2ba-0264faf1df18" containerName="pull" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.931376 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2264e822-154c-48fa-a2ba-0264faf1df18" containerName="pull" Dec 12 14:23:58 crc kubenswrapper[5108]: I1212 14:23:58.931490 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2264e822-154c-48fa-a2ba-0264faf1df18" containerName="extract" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.243382 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-k7fwg"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.243424 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.243711 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.251129 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.251493 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.251952 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-xz6tr\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.252145 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.254170 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.254728 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.254754 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.254840 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.255119 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-5ck6q\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.255597 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.351782 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-7b4z7"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.354218 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8df8\" (UniqueName: \"kubernetes.io/projected/38c49b68-5b92-4f77-89ac-08254e1e9ff2-kube-api-access-d8df8\") pod \"obo-prometheus-operator-86648f486b-k7fwg\" (UID: \"38c49b68-5b92-4f77-89ac-08254e1e9ff2\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.354299 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51006614-153b-4ead-ac0e-5eace2391fb8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l\" (UID: \"51006614-153b-4ead-ac0e-5eace2391fb8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.354333 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51006614-153b-4ead-ac0e-5eace2391fb8-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l\" (UID: \"51006614-153b-4ead-ac0e-5eace2391fb8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.354361 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93182e75-5a34-46e4-981f-bc309d02f92e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg\" (UID: \"93182e75-5a34-46e4-981f-bc309d02f92e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.354544 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93182e75-5a34-46e4-981f-bc309d02f92e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg\" (UID: \"93182e75-5a34-46e4-981f-bc309d02f92e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.355598 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.358487 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.359213 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-zn6vz\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.370537 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-7b4z7"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.455357 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51006614-153b-4ead-ac0e-5eace2391fb8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l\" (UID: \"51006614-153b-4ead-ac0e-5eace2391fb8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.455409 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51006614-153b-4ead-ac0e-5eace2391fb8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l\" (UID: \"51006614-153b-4ead-ac0e-5eace2391fb8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.455448 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93182e75-5a34-46e4-981f-bc309d02f92e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg\" (UID: \"93182e75-5a34-46e4-981f-bc309d02f92e\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.455484 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66d888bc-11dd-4c07-a7da-bcd64b81b3fd-observability-operator-tls\") pod \"observability-operator-78c97476f4-7b4z7\" (UID: \"66d888bc-11dd-4c07-a7da-bcd64b81b3fd\") " pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.455515 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9hb\" (UniqueName: \"kubernetes.io/projected/66d888bc-11dd-4c07-a7da-bcd64b81b3fd-kube-api-access-bn9hb\") pod \"observability-operator-78c97476f4-7b4z7\" (UID: \"66d888bc-11dd-4c07-a7da-bcd64b81b3fd\") " pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.455569 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93182e75-5a34-46e4-981f-bc309d02f92e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg\" (UID: \"93182e75-5a34-46e4-981f-bc309d02f92e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.455626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8df8\" (UniqueName: \"kubernetes.io/projected/38c49b68-5b92-4f77-89ac-08254e1e9ff2-kube-api-access-d8df8\") pod \"obo-prometheus-operator-86648f486b-k7fwg\" (UID: \"38c49b68-5b92-4f77-89ac-08254e1e9ff2\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.462167 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51006614-153b-4ead-ac0e-5eace2391fb8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l\" (UID: \"51006614-153b-4ead-ac0e-5eace2391fb8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.462482 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51006614-153b-4ead-ac0e-5eace2391fb8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l\" (UID: \"51006614-153b-4ead-ac0e-5eace2391fb8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.469328 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93182e75-5a34-46e4-981f-bc309d02f92e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg\" (UID: \"93182e75-5a34-46e4-981f-bc309d02f92e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.474595 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93182e75-5a34-46e4-981f-bc309d02f92e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg\" (UID: \"93182e75-5a34-46e4-981f-bc309d02f92e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.487913 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8df8\" (UniqueName: \"kubernetes.io/projected/38c49b68-5b92-4f77-89ac-08254e1e9ff2-kube-api-access-d8df8\") pod 
\"obo-prometheus-operator-86648f486b-k7fwg\" (UID: \"38c49b68-5b92-4f77-89ac-08254e1e9ff2\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.526611 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-vgfwc"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.535533 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.538918 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-vgfwc"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.540218 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-d2zp9\"" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.557432 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66d888bc-11dd-4c07-a7da-bcd64b81b3fd-observability-operator-tls\") pod \"observability-operator-78c97476f4-7b4z7\" (UID: \"66d888bc-11dd-4c07-a7da-bcd64b81b3fd\") " pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.557496 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bn9hb\" (UniqueName: \"kubernetes.io/projected/66d888bc-11dd-4c07-a7da-bcd64b81b3fd-kube-api-access-bn9hb\") pod \"observability-operator-78c97476f4-7b4z7\" (UID: \"66d888bc-11dd-4c07-a7da-bcd64b81b3fd\") " pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.564741 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/66d888bc-11dd-4c07-a7da-bcd64b81b3fd-observability-operator-tls\") pod \"observability-operator-78c97476f4-7b4z7\" (UID: \"66d888bc-11dd-4c07-a7da-bcd64b81b3fd\") " pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.566162 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.578476 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.592238 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.603368 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn9hb\" (UniqueName: \"kubernetes.io/projected/66d888bc-11dd-4c07-a7da-bcd64b81b3fd-kube-api-access-bn9hb\") pod \"observability-operator-78c97476f4-7b4z7\" (UID: \"66d888bc-11dd-4c07-a7da-bcd64b81b3fd\") " pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.658837 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq5gf\" (UniqueName: \"kubernetes.io/projected/66e7e151-771c-4d57-b247-efb889eb96aa-kube-api-access-lq5gf\") pod \"perses-operator-68bdb49cbf-vgfwc\" (UID: \"66e7e151-771c-4d57-b247-efb889eb96aa\") " pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.659239 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/66e7e151-771c-4d57-b247-efb889eb96aa-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-vgfwc\" (UID: \"66e7e151-771c-4d57-b247-efb889eb96aa\") " pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.674578 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-7b4z7" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.687852 5108 generic.go:358] "Generic (PLEG): container finished" podID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerID="81615fb9dd7c3119a61711fc7c2d3cd159c266faf4ae0399d7b4db5b65d1e793" exitCode=0 Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.688722 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" event={"ID":"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5","Type":"ContainerDied","Data":"81615fb9dd7c3119a61711fc7c2d3cd159c266faf4ae0399d7b4db5b65d1e793"} Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.760065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/66e7e151-771c-4d57-b247-efb889eb96aa-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-vgfwc\" (UID: \"66e7e151-771c-4d57-b247-efb889eb96aa\") " pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.760204 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lq5gf\" (UniqueName: \"kubernetes.io/projected/66e7e151-771c-4d57-b247-efb889eb96aa-kube-api-access-lq5gf\") pod \"perses-operator-68bdb49cbf-vgfwc\" (UID: \"66e7e151-771c-4d57-b247-efb889eb96aa\") " pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.761275 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/66e7e151-771c-4d57-b247-efb889eb96aa-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-vgfwc\" (UID: \"66e7e151-771c-4d57-b247-efb889eb96aa\") " pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.791926 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq5gf\" (UniqueName: \"kubernetes.io/projected/66e7e151-771c-4d57-b247-efb889eb96aa-kube-api-access-lq5gf\") pod \"perses-operator-68bdb49cbf-vgfwc\" (UID: \"66e7e151-771c-4d57-b247-efb889eb96aa\") " pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.852499 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.953679 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f8rcq"] Dec 12 14:23:59 crc kubenswrapper[5108]: I1212 14:23:59.954031 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f8rcq" podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="registry-server" containerID="cri-o://9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313" gracePeriod=2 Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.093838 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-k7fwg"] Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.128340 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-vgfwc"] Dec 12 14:24:00 crc kubenswrapper[5108]: W1212 14:24:00.140348 5108 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e7e151_771c_4d57_b247_efb889eb96aa.slice/crio-a8113ab4910d3ab354f326d5d2ec40496089967838785f1b770055e53b064c5e WatchSource:0}: Error finding container a8113ab4910d3ab354f326d5d2ec40496089967838785f1b770055e53b064c5e: Status 404 returned error can't find the container with id a8113ab4910d3ab354f326d5d2ec40496089967838785f1b770055e53b064c5e Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.163327 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l"] Dec 12 14:24:00 crc kubenswrapper[5108]: W1212 14:24:00.180593 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51006614_153b_4ead_ac0e_5eace2391fb8.slice/crio-15361340358e510db293aa1dd2de3d4c6f77a77018956173d58f4156eef0e1e1 WatchSource:0}: Error finding container 15361340358e510db293aa1dd2de3d4c6f77a77018956173d58f4156eef0e1e1: Status 404 returned error can't find the container with id 15361340358e510db293aa1dd2de3d4c6f77a77018956173d58f4156eef0e1e1 Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.225961 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-7b4z7"] Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.231969 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg"] Dec 12 14:24:00 crc kubenswrapper[5108]: W1212 14:24:00.235093 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66d888bc_11dd_4c07_a7da_bcd64b81b3fd.slice/crio-fdb452c4f5e33e4535b40a31f90eaaff540047dda857918949e35f0cc4856f49 WatchSource:0}: Error finding container fdb452c4f5e33e4535b40a31f90eaaff540047dda857918949e35f0cc4856f49: Status 404 returned error can't find the 
container with id fdb452c4f5e33e4535b40a31f90eaaff540047dda857918949e35f0cc4856f49 Dec 12 14:24:00 crc kubenswrapper[5108]: W1212 14:24:00.261685 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93182e75_5a34_46e4_981f_bc309d02f92e.slice/crio-1a2002f79ec8fb51c772e832b7f6d38bceb80707725bf76ea3c5126bbb5cd72c WatchSource:0}: Error finding container 1a2002f79ec8fb51c772e832b7f6d38bceb80707725bf76ea3c5126bbb5cd72c: Status 404 returned error can't find the container with id 1a2002f79ec8fb51c772e832b7f6d38bceb80707725bf76ea3c5126bbb5cd72c Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.372492 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.468563 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr94x\" (UniqueName: \"kubernetes.io/projected/7789c133-837f-462e-baa0-f0156eb61ede-kube-api-access-gr94x\") pod \"7789c133-837f-462e-baa0-f0156eb61ede\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.468746 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-catalog-content\") pod \"7789c133-837f-462e-baa0-f0156eb61ede\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.468764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-utilities\") pod \"7789c133-837f-462e-baa0-f0156eb61ede\" (UID: \"7789c133-837f-462e-baa0-f0156eb61ede\") " Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.470212 5108 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-utilities" (OuterVolumeSpecName: "utilities") pod "7789c133-837f-462e-baa0-f0156eb61ede" (UID: "7789c133-837f-462e-baa0-f0156eb61ede"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.470578 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.477290 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7789c133-837f-462e-baa0-f0156eb61ede-kube-api-access-gr94x" (OuterVolumeSpecName: "kube-api-access-gr94x") pod "7789c133-837f-462e-baa0-f0156eb61ede" (UID: "7789c133-837f-462e-baa0-f0156eb61ede"). InnerVolumeSpecName "kube-api-access-gr94x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.564027 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7789c133-837f-462e-baa0-f0156eb61ede" (UID: "7789c133-837f-462e-baa0-f0156eb61ede"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.572045 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gr94x\" (UniqueName: \"kubernetes.io/projected/7789c133-837f-462e-baa0-f0156eb61ede-kube-api-access-gr94x\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.572123 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7789c133-837f-462e-baa0-f0156eb61ede-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.697492 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" event={"ID":"38c49b68-5b92-4f77-89ac-08254e1e9ff2","Type":"ContainerStarted","Data":"212e234d960a99e902e6152bdfcc8a0da3a299936d32e9d94084576c501ffbac"} Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.698756 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" event={"ID":"66e7e151-771c-4d57-b247-efb889eb96aa","Type":"ContainerStarted","Data":"a8113ab4910d3ab354f326d5d2ec40496089967838785f1b770055e53b064c5e"} Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.700093 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" event={"ID":"51006614-153b-4ead-ac0e-5eace2391fb8","Type":"ContainerStarted","Data":"15361340358e510db293aa1dd2de3d4c6f77a77018956173d58f4156eef0e1e1"} Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.702095 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-7b4z7" event={"ID":"66d888bc-11dd-4c07-a7da-bcd64b81b3fd","Type":"ContainerStarted","Data":"fdb452c4f5e33e4535b40a31f90eaaff540047dda857918949e35f0cc4856f49"} Dec 12 14:24:00 crc 
kubenswrapper[5108]: I1212 14:24:00.704697 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" event={"ID":"93182e75-5a34-46e4-981f-bc309d02f92e","Type":"ContainerStarted","Data":"1a2002f79ec8fb51c772e832b7f6d38bceb80707725bf76ea3c5126bbb5cd72c"} Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.706932 5108 generic.go:358] "Generic (PLEG): container finished" podID="7789c133-837f-462e-baa0-f0156eb61ede" containerID="9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313" exitCode=0 Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.706959 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8rcq" event={"ID":"7789c133-837f-462e-baa0-f0156eb61ede","Type":"ContainerDied","Data":"9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313"} Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.707008 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f8rcq" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.707026 5108 scope.go:117] "RemoveContainer" containerID="9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.706994 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8rcq" event={"ID":"7789c133-837f-462e-baa0-f0156eb61ede","Type":"ContainerDied","Data":"b88446c12da22bb325deb4d9034bcf32be280549edeaaddc6dc0e2d6622d3e8c"} Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.724816 5108 scope.go:117] "RemoveContainer" containerID="6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.739950 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f8rcq"] Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.743867 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f8rcq"] Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.753144 5108 scope.go:117] "RemoveContainer" containerID="a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.768104 5108 scope.go:117] "RemoveContainer" containerID="9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313" Dec 12 14:24:00 crc kubenswrapper[5108]: E1212 14:24:00.768597 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313\": container with ID starting with 9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313 not found: ID does not exist" containerID="9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.768654 5108 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313"} err="failed to get container status \"9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313\": rpc error: code = NotFound desc = could not find container \"9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313\": container with ID starting with 9cea248dc21c154b4e79e0aa9856288588b59a26ea253671ca1160b9cadff313 not found: ID does not exist" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.768686 5108 scope.go:117] "RemoveContainer" containerID="6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d" Dec 12 14:24:00 crc kubenswrapper[5108]: E1212 14:24:00.769160 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d\": container with ID starting with 6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d not found: ID does not exist" containerID="6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.769195 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d"} err="failed to get container status \"6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d\": rpc error: code = NotFound desc = could not find container \"6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d\": container with ID starting with 6ae39a464d68ba7567712c73613f390a300da5db69ca220f162bd2f13b75052d not found: ID does not exist" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.769216 5108 scope.go:117] "RemoveContainer" containerID="a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd" Dec 12 14:24:00 crc kubenswrapper[5108]: E1212 
14:24:00.769498 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd\": container with ID starting with a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd not found: ID does not exist" containerID="a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd" Dec 12 14:24:00 crc kubenswrapper[5108]: I1212 14:24:00.769533 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd"} err="failed to get container status \"a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd\": rpc error: code = NotFound desc = could not find container \"a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd\": container with ID starting with a8678a061b98d7b39e5c2daaad75b0a88b658eadf263c3097d820bd70c3cd6fd not found: ID does not exist" Dec 12 14:24:01 crc kubenswrapper[5108]: I1212 14:24:01.437790 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7789c133-837f-462e-baa0-f0156eb61ede" path="/var/lib/kubelet/pods/7789c133-837f-462e-baa0-f0156eb61ede/volumes" Dec 12 14:24:01 crc kubenswrapper[5108]: I1212 14:24:01.490696 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cn47m" Dec 12 14:24:01 crc kubenswrapper[5108]: I1212 14:24:01.566453 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hdk9b"] Dec 12 14:24:04 crc kubenswrapper[5108]: I1212 14:24:04.527650 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:24:04 crc kubenswrapper[5108]: I1212 14:24:04.528315 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:24:04 crc kubenswrapper[5108]: I1212 14:24:04.586420 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:24:04 crc kubenswrapper[5108]: I1212 14:24:04.814765 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.120260 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-78c846dd6b-6nbxj"] Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.120986 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="extract-utilities" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.121010 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="extract-utilities" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.121032 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.121040 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.121066 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="extract-content" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.121072 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="extract-content" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.121205 5108 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="7789c133-837f-462e-baa0-f0156eb61ede" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.129396 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.138780 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.140800 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.142532 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.162145 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-dz47t\"" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.183273 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-78c846dd6b-6nbxj"] Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.264738 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-apiservice-cert\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.264949 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfgcc\" (UniqueName: \"kubernetes.io/projected/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-kube-api-access-cfgcc\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: 
\"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.265256 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-webhook-cert\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.366814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-webhook-cert\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.366883 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-apiservice-cert\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.366925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfgcc\" (UniqueName: \"kubernetes.io/projected/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-kube-api-access-cfgcc\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.378867 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-apiservice-cert\") pod 
\"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.379464 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-webhook-cert\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.385399 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfgcc\" (UniqueName: \"kubernetes.io/projected/c30384ac-8dd1-4e6a-a650-d5df7b1c17d2-kube-api-access-cfgcc\") pod \"elastic-operator-78c846dd6b-6nbxj\" (UID: \"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2\") " pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:05 crc kubenswrapper[5108]: I1212 14:24:05.488220 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" Dec 12 14:24:08 crc kubenswrapper[5108]: I1212 14:24:08.363600 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-82wgn"] Dec 12 14:24:08 crc kubenswrapper[5108]: I1212 14:24:08.364384 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-82wgn" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="registry-server" containerID="cri-o://d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b" gracePeriod=2 Dec 12 14:24:08 crc kubenswrapper[5108]: I1212 14:24:08.789345 5108 generic.go:358] "Generic (PLEG): container finished" podID="56072627-b825-4a82-a2bb-d7af04666390" containerID="d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b" exitCode=0 Dec 12 14:24:08 crc kubenswrapper[5108]: I1212 14:24:08.789585 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82wgn" event={"ID":"56072627-b825-4a82-a2bb-d7af04666390","Type":"ContainerDied","Data":"d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b"} Dec 12 14:24:14 crc kubenswrapper[5108]: E1212 14:24:14.749022 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b is running failed: container process not found" containerID="d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 14:24:14 crc kubenswrapper[5108]: E1212 14:24:14.749641 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b is running failed: container process not found" 
containerID="d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 14:24:14 crc kubenswrapper[5108]: E1212 14:24:14.749960 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b is running failed: container process not found" containerID="d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 14:24:14 crc kubenswrapper[5108]: E1212 14:24:14.750003 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-82wgn" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="registry-server" probeResult="unknown" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.338366 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.469711 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcrsw\" (UniqueName: \"kubernetes.io/projected/56072627-b825-4a82-a2bb-d7af04666390-kube-api-access-kcrsw\") pod \"56072627-b825-4a82-a2bb-d7af04666390\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.469782 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-catalog-content\") pod \"56072627-b825-4a82-a2bb-d7af04666390\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.469866 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-utilities\") pod \"56072627-b825-4a82-a2bb-d7af04666390\" (UID: \"56072627-b825-4a82-a2bb-d7af04666390\") " Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.475436 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-utilities" (OuterVolumeSpecName: "utilities") pod "56072627-b825-4a82-a2bb-d7af04666390" (UID: "56072627-b825-4a82-a2bb-d7af04666390"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.477134 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56072627-b825-4a82-a2bb-d7af04666390-kube-api-access-kcrsw" (OuterVolumeSpecName: "kube-api-access-kcrsw") pod "56072627-b825-4a82-a2bb-d7af04666390" (UID: "56072627-b825-4a82-a2bb-d7af04666390"). InnerVolumeSpecName "kube-api-access-kcrsw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.509578 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56072627-b825-4a82-a2bb-d7af04666390" (UID: "56072627-b825-4a82-a2bb-d7af04666390"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.571921 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.571960 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kcrsw\" (UniqueName: \"kubernetes.io/projected/56072627-b825-4a82-a2bb-d7af04666390-kube-api-access-kcrsw\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.571972 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56072627-b825-4a82-a2bb-d7af04666390-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.843568 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82wgn" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.843567 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82wgn" event={"ID":"56072627-b825-4a82-a2bb-d7af04666390","Type":"ContainerDied","Data":"7d8c109690c84091e1c01e0ac23cedad9b0a56baa5ebea7f715ef1629838663e"} Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.844024 5108 scope.go:117] "RemoveContainer" containerID="d6956dfd694c89c9133870e3b3e872c73d2544d35543ef19ad518b578683c13b" Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.889332 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-82wgn"] Dec 12 14:24:16 crc kubenswrapper[5108]: I1212 14:24:16.910392 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-82wgn"] Dec 12 14:24:17 crc kubenswrapper[5108]: I1212 14:24:17.420897 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56072627-b825-4a82-a2bb-d7af04666390" path="/var/lib/kubelet/pods/56072627-b825-4a82-a2bb-d7af04666390/volumes" Dec 12 14:24:22 crc kubenswrapper[5108]: I1212 14:24:22.577773 5108 scope.go:117] "RemoveContainer" containerID="bca26d145ec3370e1afb25bfeeba78e579355f38e96d95dfff555176a915bd17" Dec 12 14:24:22 crc kubenswrapper[5108]: I1212 14:24:22.623098 5108 scope.go:117] "RemoveContainer" containerID="b82f5dfc4fb48cec54990a3ff5b247f1946f48e4c65542b875d1eda7a035315e" Dec 12 14:24:22 crc kubenswrapper[5108]: I1212 14:24:22.878848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" event={"ID":"93182e75-5a34-46e4-981f-bc309d02f92e","Type":"ContainerStarted","Data":"db9fb29e31d80edee7bcbcec220ff5a9fb1f742220fdc435e433308334556896"} Dec 12 14:24:22 crc kubenswrapper[5108]: I1212 14:24:22.882857 5108 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["service-telemetry/elastic-operator-78c846dd6b-6nbxj"] Dec 12 14:24:22 crc kubenswrapper[5108]: I1212 14:24:22.912370 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-7jdvg" podStartSLOduration=1.5823028350000001 podStartE2EDuration="23.912346929s" podCreationTimestamp="2025-12-12 14:23:59 +0000 UTC" firstStartedPulling="2025-12-12 14:24:00.263970165 +0000 UTC m=+793.171961324" lastFinishedPulling="2025-12-12 14:24:22.594014259 +0000 UTC m=+815.502005418" observedRunningTime="2025-12-12 14:24:22.903313969 +0000 UTC m=+815.811305138" watchObservedRunningTime="2025-12-12 14:24:22.912346929 +0000 UTC m=+815.820338088" Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.886550 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" event={"ID":"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2","Type":"ContainerStarted","Data":"9a13cfbe65829ae06ba666d76a0a5752abb3be5d2c76509531023f4c63c149e6"} Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.888401 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" event={"ID":"66e7e151-771c-4d57-b247-efb889eb96aa","Type":"ContainerStarted","Data":"2caab24ef14f496ac7bdc7ed7cbda5afd8850f03903f7633ce596c123bbf324a"} Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.889183 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.890673 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" event={"ID":"51006614-153b-4ead-ac0e-5eace2391fb8","Type":"ContainerStarted","Data":"92c6091a610751b21958b647e08e789e9a356b792ff80c3cdb01232a1296f86b"} Dec 12 14:24:23 crc kubenswrapper[5108]: 
I1212 14:24:23.892940 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-7b4z7" event={"ID":"66d888bc-11dd-4c07-a7da-bcd64b81b3fd","Type":"ContainerStarted","Data":"f6d93dde3904d4514d139214cdc6996aee442f5f769da7725b5f9da246ce7355"}
Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.893393 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-7b4z7"
Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.894900 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-7b4z7"
Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.895478 5108 generic.go:358] "Generic (PLEG): container finished" podID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerID="b0702227357ca7c1a9a5f41fc9083443645675e523dc2e1d9912fef46e241169" exitCode=0
Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.895551 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" event={"ID":"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5","Type":"ContainerDied","Data":"b0702227357ca7c1a9a5f41fc9083443645675e523dc2e1d9912fef46e241169"}
Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.897938 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" event={"ID":"38c49b68-5b92-4f77-89ac-08254e1e9ff2","Type":"ContainerStarted","Data":"4379371e8808bc4d83fc7b7f418fd93308af94dc9e82de849a53fbd896dc3830"}
Dec 12 14:24:23 crc kubenswrapper[5108]: I1212 14:24:23.914816 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" podStartSLOduration=2.46619866 podStartE2EDuration="24.914799119s" podCreationTimestamp="2025-12-12 14:23:59 +0000 UTC" firstStartedPulling="2025-12-12 14:24:00.143337304 +0000 UTC m=+793.051328463" lastFinishedPulling="2025-12-12 14:24:22.591937763 +0000 UTC m=+815.499928922" observedRunningTime="2025-12-12 14:24:23.909980031 +0000 UTC m=+816.817971190" watchObservedRunningTime="2025-12-12 14:24:23.914799119 +0000 UTC m=+816.822790278"
Dec 12 14:24:24 crc kubenswrapper[5108]: I1212 14:24:24.017851 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-7b4z7" podStartSLOduration=2.634960657 podStartE2EDuration="25.017834461s" podCreationTimestamp="2025-12-12 14:23:59 +0000 UTC" firstStartedPulling="2025-12-12 14:24:00.24764215 +0000 UTC m=+793.155633309" lastFinishedPulling="2025-12-12 14:24:22.630515953 +0000 UTC m=+815.538507113" observedRunningTime="2025-12-12 14:24:23.97885268 +0000 UTC m=+816.886843859" watchObservedRunningTime="2025-12-12 14:24:24.017834461 +0000 UTC m=+816.925825620"
Dec 12 14:24:24 crc kubenswrapper[5108]: I1212 14:24:24.019610 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64b46fc675-4gq2l" podStartSLOduration=2.5719221340000002 podStartE2EDuration="25.019601728s" podCreationTimestamp="2025-12-12 14:23:59 +0000 UTC" firstStartedPulling="2025-12-12 14:24:00.183261901 +0000 UTC m=+793.091253060" lastFinishedPulling="2025-12-12 14:24:22.630941485 +0000 UTC m=+815.538932654" observedRunningTime="2025-12-12 14:24:24.017631465 +0000 UTC m=+816.925622644" watchObservedRunningTime="2025-12-12 14:24:24.019601728 +0000 UTC m=+816.927592887"
Dec 12 14:24:24 crc kubenswrapper[5108]: I1212 14:24:24.041509 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-k7fwg" podStartSLOduration=3.550952364 podStartE2EDuration="26.041486352s" podCreationTimestamp="2025-12-12 14:23:58 +0000 UTC" firstStartedPulling="2025-12-12 14:24:00.102195136 +0000 UTC m=+793.010186295" lastFinishedPulling="2025-12-12 14:24:22.592729124 +0000 UTC m=+815.500720283" observedRunningTime="2025-12-12 14:24:24.03991273 +0000 UTC m=+816.947903909" watchObservedRunningTime="2025-12-12 14:24:24.041486352 +0000 UTC m=+816.949477511"
Dec 12 14:24:24 crc kubenswrapper[5108]: I1212 14:24:24.904293 5108 generic.go:358] "Generic (PLEG): container finished" podID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerID="0bb4045aa7da9099553ad88a42bb6ba8c1b448eecd1aaf9029bb258d2e4db0d0" exitCode=0
Dec 12 14:24:24 crc kubenswrapper[5108]: I1212 14:24:24.904489 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" event={"ID":"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5","Type":"ContainerDied","Data":"0bb4045aa7da9099553ad88a42bb6ba8c1b448eecd1aaf9029bb258d2e4db0d0"}
Dec 12 14:24:26 crc kubenswrapper[5108]: I1212 14:24:26.637500 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" podUID="78c92fa7-6dbe-4fef-8495-6dc6fe162b22" containerName="registry" containerID="cri-o://58720441a8c7f655a0c4ceb8b2d66a5315e2354e549c69ef1b43505d3e85ff43" gracePeriod=30
Dec 12 14:24:26 crc kubenswrapper[5108]: I1212 14:24:26.932998 5108 generic.go:358] "Generic (PLEG): container finished" podID="78c92fa7-6dbe-4fef-8495-6dc6fe162b22" containerID="58720441a8c7f655a0c4ceb8b2d66a5315e2354e549c69ef1b43505d3e85ff43" exitCode=0
Dec 12 14:24:26 crc kubenswrapper[5108]: I1212 14:24:26.933162 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" event={"ID":"78c92fa7-6dbe-4fef-8495-6dc6fe162b22","Type":"ContainerDied","Data":"58720441a8c7f655a0c4ceb8b2d66a5315e2354e549c69ef1b43505d3e85ff43"}
Dec 12 14:24:26 crc kubenswrapper[5108]: I1212 14:24:26.936677 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz" event={"ID":"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5","Type":"ContainerDied","Data":"c7b8bd10dbabcf072dab22ea22bbc60af4fba0332899f2c50481ebefa8b3dc95"}
Dec 12 14:24:26 crc kubenswrapper[5108]: I1212 14:24:26.936706 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b8bd10dbabcf072dab22ea22bbc60af4fba0332899f2c50481ebefa8b3dc95"
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.091467 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz"
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.103456 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwpz\" (UniqueName: \"kubernetes.io/projected/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-kube-api-access-2nwpz\") pod \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.103530 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-bundle\") pod \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.103636 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-util\") pod \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\" (UID: \"23a8adcc-ad1d-4bcc-ac7a-cd54659866b5\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.104817 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-bundle" (OuterVolumeSpecName: "bundle") pod "23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" (UID: "23a8adcc-ad1d-4bcc-ac7a-cd54659866b5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.110665 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-kube-api-access-2nwpz" (OuterVolumeSpecName: "kube-api-access-2nwpz") pod "23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" (UID: "23a8adcc-ad1d-4bcc-ac7a-cd54659866b5"). InnerVolumeSpecName "kube-api-access-2nwpz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.123415 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-util" (OuterVolumeSpecName: "util") pod "23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" (UID: "23a8adcc-ad1d-4bcc-ac7a-cd54659866b5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.206280 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.206927 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2nwpz\" (UniqueName: \"kubernetes.io/projected/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-kube-api-access-2nwpz\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.206968 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.206980 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23a8adcc-ad1d-4bcc-ac7a-cd54659866b5-util\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.307919 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-bound-sa-token\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.308264 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-certificates\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.308369 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-trusted-ca\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.308463 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-ca-trust-extracted\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.308573 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4x8l\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-kube-api-access-b4x8l\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.309368 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.309521 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-installation-pull-secrets\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.310064 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.309621 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.310257 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-tls\") pod \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\" (UID: \"78c92fa7-6dbe-4fef-8495-6dc6fe162b22\") "
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.310539 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.310597 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.313220 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-kube-api-access-b4x8l" (OuterVolumeSpecName: "kube-api-access-b4x8l") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "kube-api-access-b4x8l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.314297 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.316176 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.318532 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.320974 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.331926 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "78c92fa7-6dbe-4fef-8495-6dc6fe162b22" (UID: "78c92fa7-6dbe-4fef-8495-6dc6fe162b22"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.413306 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.413636 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.413698 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4x8l\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-kube-api-access-b4x8l\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.413760 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.413821 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92fa7-6dbe-4fef-8495-6dc6fe162b22-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.944982 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b" event={"ID":"78c92fa7-6dbe-4fef-8495-6dc6fe162b22","Type":"ContainerDied","Data":"a141fe4905aca3e69172259bd6c7f02624d2eff9c7543ae8c5ef11b14f87a467"}
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.945073 5108 scope.go:117] "RemoveContainer" containerID="58720441a8c7f655a0c4ceb8b2d66a5315e2354e549c69ef1b43505d3e85ff43"
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.945103 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hdk9b"
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.948658 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931albknz"
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.949190 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" event={"ID":"c30384ac-8dd1-4e6a-a650-d5df7b1c17d2","Type":"ContainerStarted","Data":"f73be2160d6b8a5f5fc539caad9ed39f3da2c2311e597c033cc4ee5f5029eec8"}
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.976902 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-78c846dd6b-6nbxj" podStartSLOduration=18.989195646 podStartE2EDuration="22.976857065s" podCreationTimestamp="2025-12-12 14:24:05 +0000 UTC" firstStartedPulling="2025-12-12 14:24:22.908329302 +0000 UTC m=+815.816320461" lastFinishedPulling="2025-12-12 14:24:26.895990711 +0000 UTC m=+819.803981880" observedRunningTime="2025-12-12 14:24:27.971854392 +0000 UTC m=+820.879845581" watchObservedRunningTime="2025-12-12 14:24:27.976857065 +0000 UTC m=+820.884848224"
Dec 12 14:24:27 crc kubenswrapper[5108]: I1212 14:24:27.992572 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hdk9b"]
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.026151 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hdk9b"]
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331106 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331827 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="extract-content"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331850 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="extract-content"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331864 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerName="pull"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331870 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerName="pull"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331883 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="registry-server"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331889 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="registry-server"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331902 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerName="util"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331907 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerName="util"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331916 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78c92fa7-6dbe-4fef-8495-6dc6fe162b22" containerName="registry"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331921 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c92fa7-6dbe-4fef-8495-6dc6fe162b22" containerName="registry"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331930 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerName="extract"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331935 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerName="extract"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331947 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="extract-utilities"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.331952 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="extract-utilities"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.332057 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="78c92fa7-6dbe-4fef-8495-6dc6fe162b22" containerName="registry"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.332070 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="23a8adcc-ad1d-4bcc-ac7a-cd54659866b5" containerName="extract"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.332093 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="56072627-b825-4a82-a2bb-d7af04666390" containerName="registry-server"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.336552 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.339140 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.339421 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.339580 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.340522 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.340772 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.340903 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-6lz6s\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.340912 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.340923 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.341126 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.367876 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434486 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434536 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/256861da-f1b4-46d9-b253-050523c6398f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434592 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434633 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434649 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434732 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434775 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434881 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.434998 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.435020 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.435039 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.435237 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.435374 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536410 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536480 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536502 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536544 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536581 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536600 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536628 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.536647 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537149 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537174 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537229 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537285 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537309 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537194 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537367 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/256861da-f1b4-46d9-b253-050523c6398f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537384 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") "
pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.537936 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.538280 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.538323 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.539005 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-unicast-hosts\") 
pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.541154 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.541157 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.541189 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.541581 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.541689 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"downward-api\" (UniqueName: \"kubernetes.io/downward-api/256861da-f1b4-46d9-b253-050523c6398f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.542029 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.542058 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.542353 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/256861da-f1b4-46d9-b253-050523c6398f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"256861da-f1b4-46d9-b253-050523c6398f\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.652168 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:28 crc kubenswrapper[5108]: I1212 14:24:28.957940 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:24:29 crc kubenswrapper[5108]: I1212 14:24:29.414843 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c92fa7-6dbe-4fef-8495-6dc6fe162b22" path="/var/lib/kubelet/pods/78c92fa7-6dbe-4fef-8495-6dc6fe162b22/volumes" Dec 12 14:24:29 crc kubenswrapper[5108]: I1212 14:24:29.967204 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"256861da-f1b4-46d9-b253-050523c6398f","Type":"ContainerStarted","Data":"65fb9ca3613c327dd557250df66e64548f7b8467948e7e5df46112ff79db374d"} Dec 12 14:24:32 crc kubenswrapper[5108]: I1212 14:24:32.973017 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v"] Dec 12 14:24:32 crc kubenswrapper[5108]: I1212 14:24:32.988806 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v"] Dec 12 14:24:32 crc kubenswrapper[5108]: I1212 14:24:32.989245 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:32 crc kubenswrapper[5108]: I1212 14:24:32.992636 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:24:32 crc kubenswrapper[5108]: I1212 14:24:32.992876 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:24:32 crc kubenswrapper[5108]: I1212 14:24:32.993015 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-9r7gv\"" Dec 12 14:24:33 crc kubenswrapper[5108]: I1212 14:24:33.104474 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01584a36-3669-4a47-8498-f852ff489c03-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-7g77v\" (UID: \"01584a36-3669-4a47-8498-f852ff489c03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:33 crc kubenswrapper[5108]: I1212 14:24:33.104561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfbw\" (UniqueName: \"kubernetes.io/projected/01584a36-3669-4a47-8498-f852ff489c03-kube-api-access-7gfbw\") pod \"cert-manager-operator-controller-manager-64c74584c4-7g77v\" (UID: \"01584a36-3669-4a47-8498-f852ff489c03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:33 crc kubenswrapper[5108]: I1212 14:24:33.205710 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01584a36-3669-4a47-8498-f852ff489c03-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-7g77v\" (UID: 
\"01584a36-3669-4a47-8498-f852ff489c03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:33 crc kubenswrapper[5108]: I1212 14:24:33.206415 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfbw\" (UniqueName: \"kubernetes.io/projected/01584a36-3669-4a47-8498-f852ff489c03-kube-api-access-7gfbw\") pod \"cert-manager-operator-controller-manager-64c74584c4-7g77v\" (UID: \"01584a36-3669-4a47-8498-f852ff489c03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:33 crc kubenswrapper[5108]: I1212 14:24:33.207348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01584a36-3669-4a47-8498-f852ff489c03-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-7g77v\" (UID: \"01584a36-3669-4a47-8498-f852ff489c03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:33 crc kubenswrapper[5108]: I1212 14:24:33.233645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfbw\" (UniqueName: \"kubernetes.io/projected/01584a36-3669-4a47-8498-f852ff489c03-kube-api-access-7gfbw\") pod \"cert-manager-operator-controller-manager-64c74584c4-7g77v\" (UID: \"01584a36-3669-4a47-8498-f852ff489c03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:33 crc kubenswrapper[5108]: I1212 14:24:33.307620 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" Dec 12 14:24:34 crc kubenswrapper[5108]: I1212 14:24:34.813518 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v"] Dec 12 14:24:35 crc kubenswrapper[5108]: I1212 14:24:35.012703 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" event={"ID":"01584a36-3669-4a47-8498-f852ff489c03","Type":"ContainerStarted","Data":"a249a7d29d9782aa74ce11c0af1f6d084341e2f50b2eae21ced6a96269e2f723"} Dec 12 14:24:35 crc kubenswrapper[5108]: I1212 14:24:35.926647 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-vgfwc" Dec 12 14:24:42 crc kubenswrapper[5108]: I1212 14:24:42.296487 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jllpt"] Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.645680 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.655228 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jllpt"] Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.752591 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ftd\" (UniqueName: \"kubernetes.io/projected/bb653326-865b-4f87-9a94-72cea19d0a24-kube-api-access-w6ftd\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.752913 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-utilities\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.753055 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-catalog-content\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.854414 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-utilities\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.854470 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-catalog-content\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.854625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ftd\" (UniqueName: \"kubernetes.io/projected/bb653326-865b-4f87-9a94-72cea19d0a24-kube-api-access-w6ftd\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.854965 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-utilities\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.855205 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-catalog-content\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.877056 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6ftd\" (UniqueName: \"kubernetes.io/projected/bb653326-865b-4f87-9a94-72cea19d0a24-kube-api-access-w6ftd\") pod \"community-operators-jllpt\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") " pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:43 crc kubenswrapper[5108]: I1212 14:24:43.966256 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jllpt" Dec 12 14:24:49 crc kubenswrapper[5108]: I1212 14:24:49.379469 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jllpt"] Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.219773 5108 generic.go:358] "Generic (PLEG): container finished" podID="bb653326-865b-4f87-9a94-72cea19d0a24" containerID="fbeec747efa7f12739b6bcecfa7369a0211a0aae153999ed9af5f065c76675ee" exitCode=0 Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.219832 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jllpt" event={"ID":"bb653326-865b-4f87-9a94-72cea19d0a24","Type":"ContainerDied","Data":"fbeec747efa7f12739b6bcecfa7369a0211a0aae153999ed9af5f065c76675ee"} Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.220049 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jllpt" event={"ID":"bb653326-865b-4f87-9a94-72cea19d0a24","Type":"ContainerStarted","Data":"5068607150d3de84a5599d1537feba891f31da9ee05d3348330a52e71abd2ba6"} Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.221585 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" event={"ID":"01584a36-3669-4a47-8498-f852ff489c03","Type":"ContainerStarted","Data":"a5ccf73793ad890a644665dd54b1e59cefc51aada9baa3e8b6cb5e643d4d0f4e"} Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.223450 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"256861da-f1b4-46d9-b253-050523c6398f","Type":"ContainerStarted","Data":"2e659906601bab2802c8748d1187b762deb5beff625b07565925794332f49ee8"} Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.477102 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7g77v" podStartSLOduration=4.324010121 podStartE2EDuration="18.477062021s" podCreationTimestamp="2025-12-12 14:24:32 +0000 UTC" firstStartedPulling="2025-12-12 14:24:34.824342883 +0000 UTC m=+827.732334042" lastFinishedPulling="2025-12-12 14:24:48.977394773 +0000 UTC m=+841.885385942" observedRunningTime="2025-12-12 14:24:50.295470041 +0000 UTC m=+843.203461210" watchObservedRunningTime="2025-12-12 14:24:50.477062021 +0000 UTC m=+843.385053190" Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.478842 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:24:50 crc kubenswrapper[5108]: I1212 14:24:50.523794 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:24:51 crc kubenswrapper[5108]: I1212 14:24:51.230980 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jllpt" event={"ID":"bb653326-865b-4f87-9a94-72cea19d0a24","Type":"ContainerStarted","Data":"0cc573aa0b5cacad70622fca6c6d8678599c89e4b658ad51d1d61764e36532f7"} Dec 12 14:24:52 crc kubenswrapper[5108]: I1212 14:24:52.237859 5108 generic.go:358] "Generic (PLEG): container finished" podID="bb653326-865b-4f87-9a94-72cea19d0a24" containerID="0cc573aa0b5cacad70622fca6c6d8678599c89e4b658ad51d1d61764e36532f7" exitCode=0 Dec 12 14:24:52 crc kubenswrapper[5108]: I1212 14:24:52.237941 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jllpt" event={"ID":"bb653326-865b-4f87-9a94-72cea19d0a24","Type":"ContainerDied","Data":"0cc573aa0b5cacad70622fca6c6d8678599c89e4b658ad51d1d61764e36532f7"} Dec 12 14:24:52 crc kubenswrapper[5108]: I1212 14:24:52.239373 5108 generic.go:358] "Generic (PLEG): container finished" podID="256861da-f1b4-46d9-b253-050523c6398f" 
containerID="2e659906601bab2802c8748d1187b762deb5beff625b07565925794332f49ee8" exitCode=0 Dec 12 14:24:52 crc kubenswrapper[5108]: I1212 14:24:52.239587 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"256861da-f1b4-46d9-b253-050523c6398f","Type":"ContainerDied","Data":"2e659906601bab2802c8748d1187b762deb5beff625b07565925794332f49ee8"} Dec 12 14:24:52 crc kubenswrapper[5108]: I1212 14:24:52.771609 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-xk44r"] Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.240948 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-xk44r"] Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.241953 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.246517 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.246831 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.247011 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-cmk4c\"" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.277894 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc7c180-968f-4f54-8169-960671c0cfdf-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-xk44r\" (UID: \"ddc7c180-968f-4f54-8169-960671c0cfdf\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:53 crc kubenswrapper[5108]: 
I1212 14:24:53.277983 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhvcl\" (UniqueName: \"kubernetes.io/projected/ddc7c180-968f-4f54-8169-960671c0cfdf-kube-api-access-hhvcl\") pod \"cert-manager-webhook-7894b5b9b4-xk44r\" (UID: \"ddc7c180-968f-4f54-8169-960671c0cfdf\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.379412 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc7c180-968f-4f54-8169-960671c0cfdf-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-xk44r\" (UID: \"ddc7c180-968f-4f54-8169-960671c0cfdf\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.379518 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhvcl\" (UniqueName: \"kubernetes.io/projected/ddc7c180-968f-4f54-8169-960671c0cfdf-kube-api-access-hhvcl\") pod \"cert-manager-webhook-7894b5b9b4-xk44r\" (UID: \"ddc7c180-968f-4f54-8169-960671c0cfdf\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.397729 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc7c180-968f-4f54-8169-960671c0cfdf-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-xk44r\" (UID: \"ddc7c180-968f-4f54-8169-960671c0cfdf\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.402562 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhvcl\" (UniqueName: \"kubernetes.io/projected/ddc7c180-968f-4f54-8169-960671c0cfdf-kube-api-access-hhvcl\") pod \"cert-manager-webhook-7894b5b9b4-xk44r\" (UID: \"ddc7c180-968f-4f54-8169-960671c0cfdf\") " 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:53 crc kubenswrapper[5108]: I1212 14:24:53.560780 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" Dec 12 14:24:54 crc kubenswrapper[5108]: I1212 14:24:54.001399 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-xk44r"] Dec 12 14:24:54 crc kubenswrapper[5108]: I1212 14:24:54.259468 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"256861da-f1b4-46d9-b253-050523c6398f","Type":"ContainerStarted","Data":"ff9bbec8b2f2ac86ab3466ce1f8578ed7fc6516cfac31cf6685afdce84670888"} Dec 12 14:24:54 crc kubenswrapper[5108]: I1212 14:24:54.260999 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" event={"ID":"ddc7c180-968f-4f54-8169-960671c0cfdf","Type":"ContainerStarted","Data":"895e15a83aa8dbaad78d99bbcf317dba386a999bb91f3624521f02931174a7c0"} Dec 12 14:24:55 crc kubenswrapper[5108]: I1212 14:24:55.269004 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jllpt" event={"ID":"bb653326-865b-4f87-9a94-72cea19d0a24","Type":"ContainerStarted","Data":"7d35d6fca9d73f279a31a8a8150614b22d7fa755828b01c654f76221f035fed2"} Dec 12 14:24:55 crc kubenswrapper[5108]: I1212 14:24:55.271447 5108 generic.go:358] "Generic (PLEG): container finished" podID="256861da-f1b4-46d9-b253-050523c6398f" containerID="ff9bbec8b2f2ac86ab3466ce1f8578ed7fc6516cfac31cf6685afdce84670888" exitCode=0 Dec 12 14:24:55 crc kubenswrapper[5108]: I1212 14:24:55.271511 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"256861da-f1b4-46d9-b253-050523c6398f","Type":"ContainerDied","Data":"ff9bbec8b2f2ac86ab3466ce1f8578ed7fc6516cfac31cf6685afdce84670888"} Dec 12 14:24:55 crc 
kubenswrapper[5108]: I1212 14:24:55.293654 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jllpt" podStartSLOduration=12.565242274 podStartE2EDuration="13.293634465s" podCreationTimestamp="2025-12-12 14:24:42 +0000 UTC" firstStartedPulling="2025-12-12 14:24:50.221042554 +0000 UTC m=+843.129033713" lastFinishedPulling="2025-12-12 14:24:50.949434745 +0000 UTC m=+843.857425904" observedRunningTime="2025-12-12 14:24:55.289568226 +0000 UTC m=+848.197559385" watchObservedRunningTime="2025-12-12 14:24:55.293634465 +0000 UTC m=+848.201625614" Dec 12 14:24:56 crc kubenswrapper[5108]: I1212 14:24:56.281204 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"256861da-f1b4-46d9-b253-050523c6398f","Type":"ContainerStarted","Data":"263a166d9b7b698285f5d0db588371f4d04d2da5179c7b5a049f1fcc3111ae62"} Dec 12 14:24:56 crc kubenswrapper[5108]: I1212 14:24:56.281417 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:56 crc kubenswrapper[5108]: I1212 14:24:56.315931 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=7.970643508 podStartE2EDuration="28.315907487s" podCreationTimestamp="2025-12-12 14:24:28 +0000 UTC" firstStartedPulling="2025-12-12 14:24:28.970256353 +0000 UTC m=+821.878247512" lastFinishedPulling="2025-12-12 14:24:49.315520322 +0000 UTC m=+842.223511491" observedRunningTime="2025-12-12 14:24:56.314491629 +0000 UTC m=+849.222482808" watchObservedRunningTime="2025-12-12 14:24:56.315907487 +0000 UTC m=+849.223898646" Dec 12 14:24:57 crc kubenswrapper[5108]: I1212 14:24:57.093023 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-qjff6"] Dec 12 14:24:58 crc kubenswrapper[5108]: I1212 14:24:58.877251 5108 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-qjff6"]
Dec 12 14:24:58 crc kubenswrapper[5108]: I1212 14:24:58.877768 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:24:58 crc kubenswrapper[5108]: I1212 14:24:58.882343 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-mdl9r\""
Dec 12 14:24:58 crc kubenswrapper[5108]: I1212 14:24:58.887645 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"]
Dec 12 14:24:58 crc kubenswrapper[5108]: I1212 14:24:58.954534 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ch7\" (UniqueName: \"kubernetes.io/projected/aa6c5ee2-a726-4a5d-8708-574c15635599-kube-api-access-p6ch7\") pod \"infrawatch-operators-qjff6\" (UID: \"aa6c5ee2-a726-4a5d-8708-574c15635599\") " pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.056122 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p6ch7\" (UniqueName: \"kubernetes.io/projected/aa6c5ee2-a726-4a5d-8708-574c15635599-kube-api-access-p6ch7\") pod \"infrawatch-operators-qjff6\" (UID: \"aa6c5ee2-a726-4a5d-8708-574c15635599\") " pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.074910 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6ch7\" (UniqueName: \"kubernetes.io/projected/aa6c5ee2-a726-4a5d-8708-574c15635599-kube-api-access-p6ch7\") pod \"infrawatch-operators-qjff6\" (UID: \"aa6c5ee2-a726-4a5d-8708-574c15635599\") " pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.141174 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"]
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.141368 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.143617 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-67dx4\""
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.258569 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9dhn\" (UniqueName: \"kubernetes.io/projected/8f4c26f8-7f51-4731-b947-f238c56b2659-kube-api-access-q9dhn\") pod \"cert-manager-cainjector-7dbf76d5c8-tpbc2\" (UID: \"8f4c26f8-7f51-4731-b947-f238c56b2659\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.258821 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f4c26f8-7f51-4731-b947-f238c56b2659-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tpbc2\" (UID: \"8f4c26f8-7f51-4731-b947-f238c56b2659\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.262016 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.360399 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f4c26f8-7f51-4731-b947-f238c56b2659-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tpbc2\" (UID: \"8f4c26f8-7f51-4731-b947-f238c56b2659\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.360458 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9dhn\" (UniqueName: \"kubernetes.io/projected/8f4c26f8-7f51-4731-b947-f238c56b2659-kube-api-access-q9dhn\") pod \"cert-manager-cainjector-7dbf76d5c8-tpbc2\" (UID: \"8f4c26f8-7f51-4731-b947-f238c56b2659\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.377890 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f4c26f8-7f51-4731-b947-f238c56b2659-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tpbc2\" (UID: \"8f4c26f8-7f51-4731-b947-f238c56b2659\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.382734 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9dhn\" (UniqueName: \"kubernetes.io/projected/8f4c26f8-7f51-4731-b947-f238c56b2659-kube-api-access-q9dhn\") pod \"cert-manager-cainjector-7dbf76d5c8-tpbc2\" (UID: \"8f4c26f8-7f51-4731-b947-f238c56b2659\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:24:59 crc kubenswrapper[5108]: I1212 14:24:59.461241 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"
Dec 12 14:25:03 crc kubenswrapper[5108]: I1212 14:25:03.967291 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jllpt"
Dec 12 14:25:03 crc kubenswrapper[5108]: I1212 14:25:03.969344 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jllpt"
Dec 12 14:25:04 crc kubenswrapper[5108]: I1212 14:25:04.147817 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jllpt"
Dec 12 14:25:04 crc kubenswrapper[5108]: I1212 14:25:04.419550 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jllpt"
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:08.997591 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="256861da-f1b4-46d9-b253-050523c6398f" containerName="elasticsearch" probeResult="failure" output=<
Dec 12 14:25:09 crc kubenswrapper[5108]: {"timestamp": "2025-12-12T14:25:08+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 12 14:25:09 crc kubenswrapper[5108]: >
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.248725 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2"]
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.292398 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jllpt"]
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.298218 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-qjff6"]
Dec 12 14:25:09 crc kubenswrapper[5108]: W1212 14:25:09.303257 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6c5ee2_a726_4a5d_8708_574c15635599.slice/crio-2299645f8ce18098bbd68b463002ec060195eb83a35bf9a36fd9a38ff91fb981 WatchSource:0}: Error finding container 2299645f8ce18098bbd68b463002ec060195eb83a35bf9a36fd9a38ff91fb981: Status 404 returned error can't find the container with id 2299645f8ce18098bbd68b463002ec060195eb83a35bf9a36fd9a38ff91fb981
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.351903 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2" event={"ID":"8f4c26f8-7f51-4731-b947-f238c56b2659","Type":"ContainerStarted","Data":"3a8da7876b405f1e9de7b551e5f1b154c9447a42ad9bb3913eb28e309223cc64"}
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.353907 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" event={"ID":"ddc7c180-968f-4f54-8169-960671c0cfdf","Type":"ContainerStarted","Data":"e1c82607ec49bdcd687dfad843ce89687e7605bc64e889e3807dd15beac81a05"}
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.354892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-qjff6" event={"ID":"aa6c5ee2-a726-4a5d-8708-574c15635599","Type":"ContainerStarted","Data":"2299645f8ce18098bbd68b463002ec060195eb83a35bf9a36fd9a38ff91fb981"}
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.537470 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jllpt" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="registry-server" containerID="cri-o://7d35d6fca9d73f279a31a8a8150614b22d7fa755828b01c654f76221f035fed2" gracePeriod=2
Dec 12 14:25:09 crc kubenswrapper[5108]: I1212 14:25:09.560805 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r" podStartSLOduration=2.8590726699999998 podStartE2EDuration="17.560788304s" podCreationTimestamp="2025-12-12 14:24:52 +0000 UTC" firstStartedPulling="2025-12-12 14:24:54.016525551 +0000 UTC m=+846.924516720" lastFinishedPulling="2025-12-12 14:25:08.718241195 +0000 UTC m=+861.626232354" observedRunningTime="2025-12-12 14:25:09.55394948 +0000 UTC m=+862.461940679" watchObservedRunningTime="2025-12-12 14:25:09.560788304 +0000 UTC m=+862.468779463"
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.363339 5108 generic.go:358] "Generic (PLEG): container finished" podID="bb653326-865b-4f87-9a94-72cea19d0a24" containerID="7d35d6fca9d73f279a31a8a8150614b22d7fa755828b01c654f76221f035fed2" exitCode=0
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.363744 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jllpt" event={"ID":"bb653326-865b-4f87-9a94-72cea19d0a24","Type":"ContainerDied","Data":"7d35d6fca9d73f279a31a8a8150614b22d7fa755828b01c654f76221f035fed2"}
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.363857 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r"
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.891489 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jllpt"
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.930284 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-utilities\") pod \"bb653326-865b-4f87-9a94-72cea19d0a24\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") "
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.930382 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ftd\" (UniqueName: \"kubernetes.io/projected/bb653326-865b-4f87-9a94-72cea19d0a24-kube-api-access-w6ftd\") pod \"bb653326-865b-4f87-9a94-72cea19d0a24\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") "
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.930491 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-catalog-content\") pod \"bb653326-865b-4f87-9a94-72cea19d0a24\" (UID: \"bb653326-865b-4f87-9a94-72cea19d0a24\") "
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.931327 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-utilities" (OuterVolumeSpecName: "utilities") pod "bb653326-865b-4f87-9a94-72cea19d0a24" (UID: "bb653326-865b-4f87-9a94-72cea19d0a24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.937502 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb653326-865b-4f87-9a94-72cea19d0a24-kube-api-access-w6ftd" (OuterVolumeSpecName: "kube-api-access-w6ftd") pod "bb653326-865b-4f87-9a94-72cea19d0a24" (UID: "bb653326-865b-4f87-9a94-72cea19d0a24"). InnerVolumeSpecName "kube-api-access-w6ftd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:25:10 crc kubenswrapper[5108]: I1212 14:25:10.985136 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb653326-865b-4f87-9a94-72cea19d0a24" (UID: "bb653326-865b-4f87-9a94-72cea19d0a24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.032414 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.032456 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb653326-865b-4f87-9a94-72cea19d0a24-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.032469 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w6ftd\" (UniqueName: \"kubernetes.io/projected/bb653326-865b-4f87-9a94-72cea19d0a24-kube-api-access-w6ftd\") on node \"crc\" DevicePath \"\""
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.079116 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-6bvxl"]
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.079921 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="extract-utilities"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.079941 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="extract-utilities"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.079952 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="extract-content"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.079959 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="extract-content"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.079990 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="registry-server"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.079997 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="registry-server"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.080138 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" containerName="registry-server"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.097334 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-6bvxl"]
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.097480 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.102652 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-csvg7\""
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.133207 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vn7\" (UniqueName: \"kubernetes.io/projected/aa17ce04-523a-4162-9187-e6bf4485e7c3-kube-api-access-74vn7\") pod \"cert-manager-858d87f86b-6bvxl\" (UID: \"aa17ce04-523a-4162-9187-e6bf4485e7c3\") " pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.133345 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa17ce04-523a-4162-9187-e6bf4485e7c3-bound-sa-token\") pod \"cert-manager-858d87f86b-6bvxl\" (UID: \"aa17ce04-523a-4162-9187-e6bf4485e7c3\") " pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.235983 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-74vn7\" (UniqueName: \"kubernetes.io/projected/aa17ce04-523a-4162-9187-e6bf4485e7c3-kube-api-access-74vn7\") pod \"cert-manager-858d87f86b-6bvxl\" (UID: \"aa17ce04-523a-4162-9187-e6bf4485e7c3\") " pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.236119 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa17ce04-523a-4162-9187-e6bf4485e7c3-bound-sa-token\") pod \"cert-manager-858d87f86b-6bvxl\" (UID: \"aa17ce04-523a-4162-9187-e6bf4485e7c3\") " pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.258367 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vn7\" (UniqueName: \"kubernetes.io/projected/aa17ce04-523a-4162-9187-e6bf4485e7c3-kube-api-access-74vn7\") pod \"cert-manager-858d87f86b-6bvxl\" (UID: \"aa17ce04-523a-4162-9187-e6bf4485e7c3\") " pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.273887 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa17ce04-523a-4162-9187-e6bf4485e7c3-bound-sa-token\") pod \"cert-manager-858d87f86b-6bvxl\" (UID: \"aa17ce04-523a-4162-9187-e6bf4485e7c3\") " pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.371854 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2" event={"ID":"8f4c26f8-7f51-4731-b947-f238c56b2659","Type":"ContainerStarted","Data":"a83afc230f6f5a40ba5a2fbca068da0ad78dfd90e9f135e2915e186eef1b48fc"}
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.376457 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jllpt"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.389345 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jllpt" event={"ID":"bb653326-865b-4f87-9a94-72cea19d0a24","Type":"ContainerDied","Data":"5068607150d3de84a5599d1537feba891f31da9ee05d3348330a52e71abd2ba6"}
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.389422 5108 scope.go:117] "RemoveContainer" containerID="7d35d6fca9d73f279a31a8a8150614b22d7fa755828b01c654f76221f035fed2"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.394420 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tpbc2" podStartSLOduration=13.394400981 podStartE2EDuration="13.394400981s" podCreationTimestamp="2025-12-12 14:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:25:11.389760446 +0000 UTC m=+864.297751625" watchObservedRunningTime="2025-12-12 14:25:11.394400981 +0000 UTC m=+864.302392140"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.416748 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-6bvxl"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.431314 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jllpt"]
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.436760 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jllpt"]
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.660103 5108 scope.go:117] "RemoveContainer" containerID="0cc573aa0b5cacad70622fca6c6d8678599c89e4b658ad51d1d61764e36532f7"
Dec 12 14:25:11 crc kubenswrapper[5108]: I1212 14:25:11.679368 5108 scope.go:117] "RemoveContainer" containerID="fbeec747efa7f12739b6bcecfa7369a0211a0aae153999ed9af5f065c76675ee"
Dec 12 14:25:12 crc kubenswrapper[5108]: I1212 14:25:12.187208 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-6bvxl"]
Dec 12 14:25:13 crc kubenswrapper[5108]: I1212 14:25:13.391935 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-6bvxl" event={"ID":"aa17ce04-523a-4162-9187-e6bf4485e7c3","Type":"ContainerStarted","Data":"2a50d6c039b3499d72f524dac02a5487833d16d1a21b81ad12c739cbb94df7fd"}
Dec 12 14:25:13 crc kubenswrapper[5108]: I1212 14:25:13.392330 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-6bvxl" event={"ID":"aa17ce04-523a-4162-9187-e6bf4485e7c3","Type":"ContainerStarted","Data":"7714d43943070e88493031479030f15ff016cd7d0999c83b6bebfbae5c7fdf39"}
Dec 12 14:25:13 crc kubenswrapper[5108]: I1212 14:25:13.394049 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-qjff6" event={"ID":"aa6c5ee2-a726-4a5d-8708-574c15635599","Type":"ContainerStarted","Data":"3bbdc392692a8820d0d556e352704465e555379f5bf75f82b5e24656bdc5e630"}
Dec 12 14:25:13 crc kubenswrapper[5108]: I1212 14:25:13.416141 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb653326-865b-4f87-9a94-72cea19d0a24" path="/var/lib/kubelet/pods/bb653326-865b-4f87-9a94-72cea19d0a24/volumes"
Dec 12 14:25:13 crc kubenswrapper[5108]: I1212 14:25:13.430414 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-qjff6" podStartSLOduration=12.664425232 podStartE2EDuration="16.430393257s" podCreationTimestamp="2025-12-12 14:24:57 +0000 UTC" firstStartedPulling="2025-12-12 14:25:09.305449047 +0000 UTC m=+862.213440206" lastFinishedPulling="2025-12-12 14:25:13.071417082 +0000 UTC m=+865.979408231" observedRunningTime="2025-12-12 14:25:13.429212405 +0000 UTC m=+866.337203584" watchObservedRunningTime="2025-12-12 14:25:13.430393257 +0000 UTC m=+866.338384416"
Dec 12 14:25:13 crc kubenswrapper[5108]: I1212 14:25:13.432099 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-6bvxl" podStartSLOduration=2.432088212 podStartE2EDuration="2.432088212s" podCreationTimestamp="2025-12-12 14:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:25:13.41162004 +0000 UTC m=+866.319611199" watchObservedRunningTime="2025-12-12 14:25:13.432088212 +0000 UTC m=+866.340079371"
Dec 12 14:25:13 crc kubenswrapper[5108]: I1212 14:25:13.943633 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="256861da-f1b4-46d9-b253-050523c6398f" containerName="elasticsearch" probeResult="failure" output=<
Dec 12 14:25:13 crc kubenswrapper[5108]: {"timestamp": "2025-12-12T14:25:13+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 12 14:25:13 crc kubenswrapper[5108]: >
Dec 12 14:25:16 crc kubenswrapper[5108]: I1212 14:25:16.378702 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-xk44r"
Dec 12 14:25:18 crc kubenswrapper[5108]: I1212 14:25:18.938700 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="256861da-f1b4-46d9-b253-050523c6398f" containerName="elasticsearch" probeResult="failure" output=<
Dec 12 14:25:18 crc kubenswrapper[5108]: {"timestamp": "2025-12-12T14:25:18+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 12 14:25:18 crc kubenswrapper[5108]: >
Dec 12 14:25:19 crc kubenswrapper[5108]: I1212 14:25:19.262703 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:25:19 crc kubenswrapper[5108]: I1212 14:25:19.262776 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:25:19 crc kubenswrapper[5108]: I1212 14:25:19.287624 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:25:19 crc kubenswrapper[5108]: I1212 14:25:19.451189 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-qjff6"
Dec 12 14:25:19 crc kubenswrapper[5108]: I1212 14:25:19.986518 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:25:19 crc kubenswrapper[5108]: I1212 14:25:19.986608 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:25:24 crc kubenswrapper[5108]: I1212 14:25:24.327905 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 14:25:32 crc kubenswrapper[5108]: I1212 14:25:32.954321 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"]
Dec 12 14:25:32 crc kubenswrapper[5108]: I1212 14:25:32.966970 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"]
Dec 12 14:25:32 crc kubenswrapper[5108]: I1212 14:25:32.967123 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.058733 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.058988 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68bxz\" (UniqueName: \"kubernetes.io/projected/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-kube-api-access-68bxz\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.059037 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.160477 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68bxz\" (UniqueName: \"kubernetes.io/projected/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-kube-api-access-68bxz\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.160540 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.160577 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.161047 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.161138 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.181840 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68bxz\" (UniqueName: \"kubernetes.io/projected/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-kube-api-access-68bxz\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.285357 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.590738 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc"]
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.735826 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"]
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.741056 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.745510 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.749987 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"]
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.873116 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz26d\" (UniqueName: \"kubernetes.io/projected/07f7b2af-30f8-49c2-895e-73628bf6158d-kube-api-access-zz26d\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.873317 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.873452 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.974580 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.974634 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zz26d\" (UniqueName: \"kubernetes.io/projected/07f7b2af-30f8-49c2-895e-73628bf6158d-kube-api-access-zz26d\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.974708 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.975071 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.975198 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:33 crc kubenswrapper[5108]: I1212 14:25:33.995232 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz26d\" (UniqueName: \"kubernetes.io/projected/07f7b2af-30f8-49c2-895e-73628bf6158d-kube-api-access-zz26d\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.073983 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.494440 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"]
Dec 12 14:25:34 crc kubenswrapper[5108]: W1212 14:25:34.507685 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f7b2af_30f8_49c2_895e_73628bf6158d.slice/crio-3bec00cbfefe325d6883192e505e5a33b2fd42bd7a2124b87361fe450390a6d5 WatchSource:0}: Error finding container 3bec00cbfefe325d6883192e505e5a33b2fd42bd7a2124b87361fe450390a6d5: Status 404 returned error can't find the container with id 3bec00cbfefe325d6883192e505e5a33b2fd42bd7a2124b87361fe450390a6d5
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.543689 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerStarted","Data":"3bec00cbfefe325d6883192e505e5a33b2fd42bd7a2124b87361fe450390a6d5"}
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.546783 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" event={"ID":"403914e9-c5c4-4b82-bc5e-1eb93a75e19a","Type":"ContainerStarted","Data":"d5549b8ae99f91a238e69b8f7ffcd8a74b9d078370b1a7604a44c71e26b01887"}
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.547033 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" event={"ID":"403914e9-c5c4-4b82-bc5e-1eb93a75e19a","Type":"ContainerStarted","Data":"83403153920dc1d186fe0bbc58168158a8bdddcf297bd59f3615e922d3f93a7e"}
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.732716 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc"]
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.932229 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc"]
Dec 12 14:25:34 crc kubenswrapper[5108]: I1212 14:25:34.932428 5108 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.091781 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.091971 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zmbd\" (UniqueName: \"kubernetes.io/projected/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-kube-api-access-9zmbd\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.092093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.193771 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" 
Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.193890 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.193953 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zmbd\" (UniqueName: \"kubernetes.io/projected/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-kube-api-access-9zmbd\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.194953 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.194958 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.216632 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zmbd\" (UniqueName: 
\"kubernetes.io/projected/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-kube-api-access-9zmbd\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.249409 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.563845 5108 generic.go:358] "Generic (PLEG): container finished" podID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerID="d5549b8ae99f91a238e69b8f7ffcd8a74b9d078370b1a7604a44c71e26b01887" exitCode=0 Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.564284 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" event={"ID":"403914e9-c5c4-4b82-bc5e-1eb93a75e19a","Type":"ContainerDied","Data":"d5549b8ae99f91a238e69b8f7ffcd8a74b9d078370b1a7604a44c71e26b01887"} Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.574868 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerStarted","Data":"bdee19001f56c22edaba1a839108078c978ff1e422e81c8aea08333db474ba69"} Dec 12 14:25:35 crc kubenswrapper[5108]: I1212 14:25:35.890594 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc"] Dec 12 14:25:36 crc kubenswrapper[5108]: I1212 14:25:36.584434 5108 generic.go:358] "Generic (PLEG): container finished" podID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerID="bdee19001f56c22edaba1a839108078c978ff1e422e81c8aea08333db474ba69" exitCode=0 Dec 12 14:25:36 crc 
kubenswrapper[5108]: I1212 14:25:36.584579 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerDied","Data":"bdee19001f56c22edaba1a839108078c978ff1e422e81c8aea08333db474ba69"} Dec 12 14:25:36 crc kubenswrapper[5108]: I1212 14:25:36.588358 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" event={"ID":"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd","Type":"ContainerStarted","Data":"1e7a414d504a7c9a57b5ce44282491124d874863d213d8540f37626dbf674468"} Dec 12 14:25:37 crc kubenswrapper[5108]: I1212 14:25:37.599366 5108 generic.go:358] "Generic (PLEG): container finished" podID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerID="c55f2312448033d957fba5bad9dc555dc6ee12b75a4b1ef5fa83afbd58316dd9" exitCode=0 Dec 12 14:25:37 crc kubenswrapper[5108]: I1212 14:25:37.599821 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" event={"ID":"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd","Type":"ContainerDied","Data":"c55f2312448033d957fba5bad9dc555dc6ee12b75a4b1ef5fa83afbd58316dd9"} Dec 12 14:25:37 crc kubenswrapper[5108]: E1212 14:25:37.626728 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8a5d9e9_9241_4559_bc61_8ed6f5a600dd.slice/crio-conmon-c55f2312448033d957fba5bad9dc555dc6ee12b75a4b1ef5fa83afbd58316dd9.scope\": RecentStats: unable to find data in memory cache]" Dec 12 14:25:38 crc kubenswrapper[5108]: I1212 14:25:38.608302 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" 
event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerStarted","Data":"88cbffb801b7fd9a68ac21b043a18d0b45c253343a77cb35ee424c77839902ea"} Dec 12 14:25:38 crc kubenswrapper[5108]: I1212 14:25:38.615734 5108 generic.go:358] "Generic (PLEG): container finished" podID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerID="468ce9df2bb78646f19ad05083b65dcff211847df03ffadf7d7e33ca6108c2d5" exitCode=0 Dec 12 14:25:38 crc kubenswrapper[5108]: I1212 14:25:38.615827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" event={"ID":"403914e9-c5c4-4b82-bc5e-1eb93a75e19a","Type":"ContainerDied","Data":"468ce9df2bb78646f19ad05083b65dcff211847df03ffadf7d7e33ca6108c2d5"} Dec 12 14:25:39 crc kubenswrapper[5108]: I1212 14:25:39.624257 5108 generic.go:358] "Generic (PLEG): container finished" podID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerID="f64d695a4e3e62fce3011ebaf2b7b38c6f99377cd8485ea16cbde42ee423a149" exitCode=0 Dec 12 14:25:39 crc kubenswrapper[5108]: I1212 14:25:39.624447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" event={"ID":"403914e9-c5c4-4b82-bc5e-1eb93a75e19a","Type":"ContainerDied","Data":"f64d695a4e3e62fce3011ebaf2b7b38c6f99377cd8485ea16cbde42ee423a149"} Dec 12 14:25:39 crc kubenswrapper[5108]: I1212 14:25:39.626998 5108 generic.go:358] "Generic (PLEG): container finished" podID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerID="88cbffb801b7fd9a68ac21b043a18d0b45c253343a77cb35ee424c77839902ea" exitCode=0 Dec 12 14:25:39 crc kubenswrapper[5108]: I1212 14:25:39.627163 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerDied","Data":"88cbffb801b7fd9a68ac21b043a18d0b45c253343a77cb35ee424c77839902ea"} Dec 12 
14:25:39 crc kubenswrapper[5108]: I1212 14:25:39.629918 5108 generic.go:358] "Generic (PLEG): container finished" podID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerID="42d7743dbc2b9e539d05b74b463a287cfeca96a3d470b0ba028f8dd582b71511" exitCode=0 Dec 12 14:25:39 crc kubenswrapper[5108]: I1212 14:25:39.629964 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" event={"ID":"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd","Type":"ContainerDied","Data":"42d7743dbc2b9e539d05b74b463a287cfeca96a3d470b0ba028f8dd582b71511"} Dec 12 14:25:40 crc kubenswrapper[5108]: I1212 14:25:40.639284 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" event={"ID":"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd","Type":"ContainerStarted","Data":"6c4e76e442c44e56e1d274423a841b8f0bc6a7a17c00b730f314b82f1a1b68c4"} Dec 12 14:25:40 crc kubenswrapper[5108]: I1212 14:25:40.641114 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerStarted","Data":"c83ec2b2db97037c7591fb7c92037596d517c987cb4b4d46ab99dc1040fa0b40"} Dec 12 14:25:40 crc kubenswrapper[5108]: I1212 14:25:40.665518 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" podStartSLOduration=5.521393143 podStartE2EDuration="6.665496908s" podCreationTimestamp="2025-12-12 14:25:34 +0000 UTC" firstStartedPulling="2025-12-12 14:25:37.605622962 +0000 UTC m=+890.513614121" lastFinishedPulling="2025-12-12 14:25:38.749726697 +0000 UTC m=+891.657717886" observedRunningTime="2025-12-12 14:25:40.660202445 +0000 UTC m=+893.568193644" watchObservedRunningTime="2025-12-12 14:25:40.665496908 +0000 UTC m=+893.573488077" Dec 12 
14:25:40 crc kubenswrapper[5108]: I1212 14:25:40.680637 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" podStartSLOduration=5.859389596 podStartE2EDuration="7.680614296s" podCreationTimestamp="2025-12-12 14:25:33 +0000 UTC" firstStartedPulling="2025-12-12 14:25:36.587275481 +0000 UTC m=+889.495266640" lastFinishedPulling="2025-12-12 14:25:38.408500181 +0000 UTC m=+891.316491340" observedRunningTime="2025-12-12 14:25:40.680603556 +0000 UTC m=+893.588594725" watchObservedRunningTime="2025-12-12 14:25:40.680614296 +0000 UTC m=+893.588605465" Dec 12 14:25:40 crc kubenswrapper[5108]: I1212 14:25:40.915738 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.027155 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-bundle\") pod \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.027288 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68bxz\" (UniqueName: \"kubernetes.io/projected/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-kube-api-access-68bxz\") pod \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.027311 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-util\") pod \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\" (UID: \"403914e9-c5c4-4b82-bc5e-1eb93a75e19a\") " Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.027894 
5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-bundle" (OuterVolumeSpecName: "bundle") pod "403914e9-c5c4-4b82-bc5e-1eb93a75e19a" (UID: "403914e9-c5c4-4b82-bc5e-1eb93a75e19a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.034367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-kube-api-access-68bxz" (OuterVolumeSpecName: "kube-api-access-68bxz") pod "403914e9-c5c4-4b82-bc5e-1eb93a75e19a" (UID: "403914e9-c5c4-4b82-bc5e-1eb93a75e19a"). InnerVolumeSpecName "kube-api-access-68bxz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.036837 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-util" (OuterVolumeSpecName: "util") pod "403914e9-c5c4-4b82-bc5e-1eb93a75e19a" (UID: "403914e9-c5c4-4b82-bc5e-1eb93a75e19a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.128614 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.128864 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68bxz\" (UniqueName: \"kubernetes.io/projected/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-kube-api-access-68bxz\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.128944 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403914e9-c5c4-4b82-bc5e-1eb93a75e19a-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.649208 5108 generic.go:358] "Generic (PLEG): container finished" podID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerID="6c4e76e442c44e56e1d274423a841b8f0bc6a7a17c00b730f314b82f1a1b68c4" exitCode=0 Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.649335 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" event={"ID":"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd","Type":"ContainerDied","Data":"6c4e76e442c44e56e1d274423a841b8f0bc6a7a17c00b730f314b82f1a1b68c4"} Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.651152 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.651198 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747wvzkc" event={"ID":"403914e9-c5c4-4b82-bc5e-1eb93a75e19a","Type":"ContainerDied","Data":"83403153920dc1d186fe0bbc58168158a8bdddcf297bd59f3615e922d3f93a7e"} Dec 12 14:25:41 crc kubenswrapper[5108]: I1212 14:25:41.651232 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83403153920dc1d186fe0bbc58168158a8bdddcf297bd59f3615e922d3f93a7e" Dec 12 14:25:42 crc kubenswrapper[5108]: I1212 14:25:42.659645 5108 generic.go:358] "Generic (PLEG): container finished" podID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerID="c83ec2b2db97037c7591fb7c92037596d517c987cb4b4d46ab99dc1040fa0b40" exitCode=0 Dec 12 14:25:42 crc kubenswrapper[5108]: I1212 14:25:42.659738 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerDied","Data":"c83ec2b2db97037c7591fb7c92037596d517c987cb4b4d46ab99dc1040fa0b40"} Dec 12 14:25:42 crc kubenswrapper[5108]: I1212 14:25:42.935749 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.051361 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-util\") pod \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.051501 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zmbd\" (UniqueName: \"kubernetes.io/projected/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-kube-api-access-9zmbd\") pod \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.051546 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-bundle\") pod \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\" (UID: \"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd\") " Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.052211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-bundle" (OuterVolumeSpecName: "bundle") pod "c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" (UID: "c8a5d9e9-9241-4559-bc61-8ed6f5a600dd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.056056 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-kube-api-access-9zmbd" (OuterVolumeSpecName: "kube-api-access-9zmbd") pod "c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" (UID: "c8a5d9e9-9241-4559-bc61-8ed6f5a600dd"). InnerVolumeSpecName "kube-api-access-9zmbd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.062748 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-util" (OuterVolumeSpecName: "util") pod "c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" (UID: "c8a5d9e9-9241-4559-bc61-8ed6f5a600dd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.153610 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.153656 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zmbd\" (UniqueName: \"kubernetes.io/projected/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-kube-api-access-9zmbd\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.153673 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8a5d9e9-9241-4559-bc61-8ed6f5a600dd-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.669493 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.669492 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788k9jfc" event={"ID":"c8a5d9e9-9241-4559-bc61-8ed6f5a600dd","Type":"ContainerDied","Data":"1e7a414d504a7c9a57b5ce44282491124d874863d213d8540f37626dbf674468"} Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.669916 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7a414d504a7c9a57b5ce44282491124d874863d213d8540f37626dbf674468" Dec 12 14:25:43 crc kubenswrapper[5108]: I1212 14:25:43.954741 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.065006 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-util\") pod \"07f7b2af-30f8-49c2-895e-73628bf6158d\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.065104 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-bundle\") pod \"07f7b2af-30f8-49c2-895e-73628bf6158d\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.065194 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz26d\" (UniqueName: \"kubernetes.io/projected/07f7b2af-30f8-49c2-895e-73628bf6158d-kube-api-access-zz26d\") pod \"07f7b2af-30f8-49c2-895e-73628bf6158d\" (UID: \"07f7b2af-30f8-49c2-895e-73628bf6158d\") " Dec 12 14:25:44 crc 
kubenswrapper[5108]: I1212 14:25:44.066105 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-bundle" (OuterVolumeSpecName: "bundle") pod "07f7b2af-30f8-49c2-895e-73628bf6158d" (UID: "07f7b2af-30f8-49c2-895e-73628bf6158d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.075469 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f7b2af-30f8-49c2-895e-73628bf6158d-kube-api-access-zz26d" (OuterVolumeSpecName: "kube-api-access-zz26d") pod "07f7b2af-30f8-49c2-895e-73628bf6158d" (UID: "07f7b2af-30f8-49c2-895e-73628bf6158d"). InnerVolumeSpecName "kube-api-access-zz26d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.081484 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-util" (OuterVolumeSpecName: "util") pod "07f7b2af-30f8-49c2-895e-73628bf6158d" (UID: "07f7b2af-30f8-49c2-895e-73628bf6158d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.166555 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zz26d\" (UniqueName: \"kubernetes.io/projected/07f7b2af-30f8-49c2-895e-73628bf6158d-kube-api-access-zz26d\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.166593 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.166603 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07f7b2af-30f8-49c2-895e-73628bf6158d-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.676872 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls" event={"ID":"07f7b2af-30f8-49c2-895e-73628bf6158d","Type":"ContainerDied","Data":"3bec00cbfefe325d6883192e505e5a33b2fd42bd7a2124b87361fe450390a6d5"} Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.677255 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bec00cbfefe325d6883192e505e5a33b2fd42bd7a2124b87361fe450390a6d5" Dec 12 14:25:44 crc kubenswrapper[5108]: I1212 14:25:44.676905 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flcvls"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.406480 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"]
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407277 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407296 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407323 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerName="pull"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407330 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerName="pull"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407344 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerName="util"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407352 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerName="util"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407363 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerName="util"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407370 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerName="util"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407393 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerName="util"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407400 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerName="util"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407411 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerName="pull"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407417 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerName="pull"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407427 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerName="pull"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407435 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerName="pull"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407442 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407449 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407458 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407465 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407597 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="07f7b2af-30f8-49c2-895e-73628bf6158d" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407614 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="403914e9-c5c4-4b82-bc5e-1eb93a75e19a" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.407623 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c8a5d9e9-9241-4559-bc61-8ed6f5a600dd" containerName="extract"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.839824 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"]
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.840985 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.848592 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-bpwqt\""
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.893504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc6734a9-9925-4e27-aa50-399dd6b05f37-runner\") pod \"service-telemetry-operator-ccf9cd448-6qcq5\" (UID: \"fc6734a9-9925-4e27-aa50-399dd6b05f37\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.894017 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmqh\" (UniqueName: \"kubernetes.io/projected/fc6734a9-9925-4e27-aa50-399dd6b05f37-kube-api-access-brmqh\") pod \"service-telemetry-operator-ccf9cd448-6qcq5\" (UID: \"fc6734a9-9925-4e27-aa50-399dd6b05f37\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.995804 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc6734a9-9925-4e27-aa50-399dd6b05f37-runner\") pod \"service-telemetry-operator-ccf9cd448-6qcq5\" (UID: \"fc6734a9-9925-4e27-aa50-399dd6b05f37\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.995915 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brmqh\" (UniqueName: \"kubernetes.io/projected/fc6734a9-9925-4e27-aa50-399dd6b05f37-kube-api-access-brmqh\") pod \"service-telemetry-operator-ccf9cd448-6qcq5\" (UID: \"fc6734a9-9925-4e27-aa50-399dd6b05f37\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:46 crc kubenswrapper[5108]: I1212 14:25:46.996427 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc6734a9-9925-4e27-aa50-399dd6b05f37-runner\") pod \"service-telemetry-operator-ccf9cd448-6qcq5\" (UID: \"fc6734a9-9925-4e27-aa50-399dd6b05f37\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.012580 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmqh\" (UniqueName: \"kubernetes.io/projected/fc6734a9-9925-4e27-aa50-399dd6b05f37-kube-api-access-brmqh\") pod \"service-telemetry-operator-ccf9cd448-6qcq5\" (UID: \"fc6734a9-9925-4e27-aa50-399dd6b05f37\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.160651 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.591787 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5"]
Dec 12 14:25:47 crc kubenswrapper[5108]: W1212 14:25:47.600302 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc6734a9_9925_4e27_aa50_399dd6b05f37.slice/crio-4939e7ed21eba17393b2f37bb5ab74da2903078b5437e7f058f0fd5975adec3b WatchSource:0}: Error finding container 4939e7ed21eba17393b2f37bb5ab74da2903078b5437e7f058f0fd5975adec3b: Status 404 returned error can't find the container with id 4939e7ed21eba17393b2f37bb5ab74da2903078b5437e7f058f0fd5975adec3b
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.695314 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5" event={"ID":"fc6734a9-9925-4e27-aa50-399dd6b05f37","Type":"ContainerStarted","Data":"4939e7ed21eba17393b2f37bb5ab74da2903078b5437e7f058f0fd5975adec3b"}
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.720206 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-qnlhq"]
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.887488 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log"
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.892920 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log"
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.903869 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:25:47 crc kubenswrapper[5108]: I1212 14:25:47.904782 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.514328 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.519113 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-9xql8\""
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.522625 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-qnlhq"]
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.522666 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"]
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.534837 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"]
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.535202 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.537786 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-jddsz\""
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.539999 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4cvr\" (UniqueName: \"kubernetes.io/projected/6744fabe-b8c0-4663-89f8-99bfe882839e-kube-api-access-t4cvr\") pod \"interconnect-operator-78b9bd8798-qnlhq\" (UID: \"6744fabe-b8c0-4663-89f8-99bfe882839e\") " pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.640950 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4cvr\" (UniqueName: \"kubernetes.io/projected/6744fabe-b8c0-4663-89f8-99bfe882839e-kube-api-access-t4cvr\") pod \"interconnect-operator-78b9bd8798-qnlhq\" (UID: \"6744fabe-b8c0-4663-89f8-99bfe882839e\") " pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.641702 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvq4\" (UniqueName: \"kubernetes.io/projected/6d2c398c-2f9a-4763-bb62-ab2dc4d63620-kube-api-access-4pvq4\") pod \"smart-gateway-operator-5766884c8f-hdsbc\" (UID: \"6d2c398c-2f9a-4763-bb62-ab2dc4d63620\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.641862 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/6d2c398c-2f9a-4763-bb62-ab2dc4d63620-runner\") pod \"smart-gateway-operator-5766884c8f-hdsbc\" (UID: \"6d2c398c-2f9a-4763-bb62-ab2dc4d63620\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.661247 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4cvr\" (UniqueName: \"kubernetes.io/projected/6744fabe-b8c0-4663-89f8-99bfe882839e-kube-api-access-t4cvr\") pod \"interconnect-operator-78b9bd8798-qnlhq\" (UID: \"6744fabe-b8c0-4663-89f8-99bfe882839e\") " pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.743420 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvq4\" (UniqueName: \"kubernetes.io/projected/6d2c398c-2f9a-4763-bb62-ab2dc4d63620-kube-api-access-4pvq4\") pod \"smart-gateway-operator-5766884c8f-hdsbc\" (UID: \"6d2c398c-2f9a-4763-bb62-ab2dc4d63620\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.743497 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/6d2c398c-2f9a-4763-bb62-ab2dc4d63620-runner\") pod \"smart-gateway-operator-5766884c8f-hdsbc\" (UID: \"6d2c398c-2f9a-4763-bb62-ab2dc4d63620\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.744067 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/6d2c398c-2f9a-4763-bb62-ab2dc4d63620-runner\") pod \"smart-gateway-operator-5766884c8f-hdsbc\" (UID: \"6d2c398c-2f9a-4763-bb62-ab2dc4d63620\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.767877 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvq4\" (UniqueName: \"kubernetes.io/projected/6d2c398c-2f9a-4763-bb62-ab2dc4d63620-kube-api-access-4pvq4\") pod \"smart-gateway-operator-5766884c8f-hdsbc\" (UID: \"6d2c398c-2f9a-4763-bb62-ab2dc4d63620\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.835232 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.854815 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.987589 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:25:49 crc kubenswrapper[5108]: I1212 14:25:49.987679 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:25:50 crc kubenswrapper[5108]: I1212 14:25:50.109762 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-hdsbc"]
Dec 12 14:25:50 crc kubenswrapper[5108]: W1212 14:25:50.132623 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d2c398c_2f9a_4763_bb62_ab2dc4d63620.slice/crio-96c6f6f8422afcd183e4c8ab693a753603586d01c33c164f5d784308f1efc198 WatchSource:0}: Error finding container 96c6f6f8422afcd183e4c8ab693a753603586d01c33c164f5d784308f1efc198: Status 404 returned error can't find the container with id 96c6f6f8422afcd183e4c8ab693a753603586d01c33c164f5d784308f1efc198
Dec 12 14:25:50 crc kubenswrapper[5108]: I1212 14:25:50.362136 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-qnlhq"]
Dec 12 14:25:50 crc kubenswrapper[5108]: I1212 14:25:50.732784 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc" event={"ID":"6d2c398c-2f9a-4763-bb62-ab2dc4d63620","Type":"ContainerStarted","Data":"96c6f6f8422afcd183e4c8ab693a753603586d01c33c164f5d784308f1efc198"}
Dec 12 14:25:50 crc kubenswrapper[5108]: I1212 14:25:50.742773 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq" event={"ID":"6744fabe-b8c0-4663-89f8-99bfe882839e","Type":"ContainerStarted","Data":"b881b1dfff3754f86be97f346475c65154f999b98746ee952de7d6f3c86152d5"}
Dec 12 14:26:19 crc kubenswrapper[5108]: I1212 14:26:19.987309 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:26:19 crc kubenswrapper[5108]: I1212 14:26:19.988345 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:26:19 crc kubenswrapper[5108]: I1212 14:26:19.988425 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w294k"
Dec 12 14:26:19 crc kubenswrapper[5108]: I1212 14:26:19.989465 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ab7ac5ee0d1edb7108d7f5ec4e957c0f7674bd3372b098711a9332769c2a4ec"} pod="openshift-machine-config-operator/machine-config-daemon-w294k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 14:26:19 crc kubenswrapper[5108]: I1212 14:26:19.989600 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" containerID="cri-o://4ab7ac5ee0d1edb7108d7f5ec4e957c0f7674bd3372b098711a9332769c2a4ec" gracePeriod=600
Dec 12 14:26:24 crc kubenswrapper[5108]: I1212 14:26:24.152863 5108 generic.go:358] "Generic (PLEG): container finished" podID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerID="4ab7ac5ee0d1edb7108d7f5ec4e957c0f7674bd3372b098711a9332769c2a4ec" exitCode=0
Dec 12 14:26:24 crc kubenswrapper[5108]: I1212 14:26:24.153015 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerDied","Data":"4ab7ac5ee0d1edb7108d7f5ec4e957c0f7674bd3372b098711a9332769c2a4ec"}
Dec 12 14:26:24 crc kubenswrapper[5108]: I1212 14:26:24.153271 5108 scope.go:117] "RemoveContainer" containerID="f71b57f29e2ff270e6b601b2c583e9205b2e1619208bf2a39737e6a39c51a2f1"
Dec 12 14:26:34 crc kubenswrapper[5108]: I1212 14:26:34.266125 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc" event={"ID":"6d2c398c-2f9a-4763-bb62-ab2dc4d63620","Type":"ContainerStarted","Data":"6bd8636a6b37d21cbe5f70dbff05a1e43bbbaec08b8f196c260cbad1bfbf7fa5"}
Dec 12 14:26:34 crc kubenswrapper[5108]: I1212 14:26:34.269452 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq" event={"ID":"6744fabe-b8c0-4663-89f8-99bfe882839e","Type":"ContainerStarted","Data":"931832693fdcc35ed86de341e0e6fe600ba4a8824696bf11b3930e0767192a5e"}
Dec 12 14:26:34 crc kubenswrapper[5108]: I1212 14:26:34.272063 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"0f19199a3d9c3a6c659bccb9623a347d927104d49964c4e1d410c151cedc6fa9"}
Dec 12 14:26:34 crc kubenswrapper[5108]: I1212 14:26:34.288155 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5766884c8f-hdsbc" podStartSLOduration=1.795856627 podStartE2EDuration="45.28814149s" podCreationTimestamp="2025-12-12 14:25:49 +0000 UTC" firstStartedPulling="2025-12-12 14:25:50.150367343 +0000 UTC m=+903.058358502" lastFinishedPulling="2025-12-12 14:26:33.642652206 +0000 UTC m=+946.550643365" observedRunningTime="2025-12-12 14:26:34.286702301 +0000 UTC m=+947.194693470" watchObservedRunningTime="2025-12-12 14:26:34.28814149 +0000 UTC m=+947.196132649"
Dec 12 14:26:34 crc kubenswrapper[5108]: I1212 14:26:34.568880 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-qnlhq" podStartSLOduration=21.99624768 podStartE2EDuration="47.568855343s" podCreationTimestamp="2025-12-12 14:25:47 +0000 UTC" firstStartedPulling="2025-12-12 14:25:50.374779467 +0000 UTC m=+903.282770626" lastFinishedPulling="2025-12-12 14:26:15.94738713 +0000 UTC m=+928.855378289" observedRunningTime="2025-12-12 14:26:34.312571289 +0000 UTC m=+947.220562448" watchObservedRunningTime="2025-12-12 14:26:34.568855343 +0000 UTC m=+947.476846522"
Dec 12 14:26:35 crc kubenswrapper[5108]: I1212 14:26:35.280466 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5" event={"ID":"fc6734a9-9925-4e27-aa50-399dd6b05f37","Type":"ContainerStarted","Data":"1b68af040089507ef30a6ca821da5026054365e97ad964e47ef05bc0e6c80f98"}
Dec 12 14:26:35 crc kubenswrapper[5108]: I1212 14:26:35.302676 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-ccf9cd448-6qcq5" podStartSLOduration=2.790402345 podStartE2EDuration="49.302653789s" podCreationTimestamp="2025-12-12 14:25:46 +0000 UTC" firstStartedPulling="2025-12-12 14:25:47.607960925 +0000 UTC m=+900.515952084" lastFinishedPulling="2025-12-12 14:26:34.120212369 +0000 UTC m=+947.028203528" observedRunningTime="2025-12-12 14:26:35.29712143 +0000 UTC m=+948.205112619" watchObservedRunningTime="2025-12-12 14:26:35.302653789 +0000 UTC m=+948.210644958"
Dec 12 14:27:00 crc kubenswrapper[5108]: I1212 14:27:00.731729 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wlh9t"]
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.386621 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wlh9t"]
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.386805 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.389502 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\""
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.389676 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\""
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.389830 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\""
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.389867 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\""
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.390171 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\""
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.390355 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-h6lvx\""
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.390454 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\""
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.413897 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-users\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.413997 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzp7\" (UniqueName: \"kubernetes.io/projected/a5304ef7-fb2d-463b-8f63-b765183d5e64-kube-api-access-qlzp7\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.414047 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.414104 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.414181 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.414226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-config\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.414293 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.515156 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.515453 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.515545 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-config\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.515682 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.515823 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-users\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.515943 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlzp7\" (UniqueName: \"kubernetes.io/projected/a5304ef7-fb2d-463b-8f63-b765183d5e64-kube-api-access-qlzp7\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.516040 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.516585 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-config\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.521793 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.522007 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.522236 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.529526 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-users\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.540415 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlzp7\" (UniqueName: \"kubernetes.io/projected/a5304ef7-fb2d-463b-8f63-b765183d5e64-kube-api-access-qlzp7\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.550904 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-wlh9t\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.703062 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.905529 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wlh9t"]
Dec 12 14:27:01 crc kubenswrapper[5108]: I1212 14:27:01.915621 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 14:27:02 crc kubenswrapper[5108]: I1212 14:27:02.468406 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t" event={"ID":"a5304ef7-fb2d-463b-8f63-b765183d5e64","Type":"ContainerStarted","Data":"10010c5bbea5fc2920edc8bfe9a6f36c7ccd622aa589070875a661f7f14c5f6b"}
Dec 12 14:27:07 crc kubenswrapper[5108]: I1212 14:27:07.504965 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t" event={"ID":"a5304ef7-fb2d-463b-8f63-b765183d5e64","Type":"ContainerStarted","Data":"79a578d9c071bea253a1aaae51da1e0e557dcc901fd0374acde6fcd99e6724de"}
Dec 12 14:27:07 crc kubenswrapper[5108]: I1212 14:27:07.524565 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t" podStartSLOduration=2.348394615 podStartE2EDuration="7.524546353s" podCreationTimestamp="2025-12-12 14:27:00 +0000 UTC" firstStartedPulling="2025-12-12 14:27:01.915816141 +0000 UTC m=+974.823807300" lastFinishedPulling="2025-12-12 14:27:07.091967879 +0000 UTC m=+979.999959038" observedRunningTime="2025-12-12 14:27:07.522366114 +0000 UTC m=+980.430357273" watchObservedRunningTime="2025-12-12 14:27:07.524546353 +0000 UTC m=+980.432537502"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.050602 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.118403 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.118563 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.124642 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.125202 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.125225 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.125370 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.125659 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.125974 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.126523 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-b46vr\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.126858 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\""
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287212 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287275 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/701c19e2-ba3f-4570-9d96-5399d9cb9415-tls-assets\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287313 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkvw\" (UniqueName: \"kubernetes.io/projected/701c19e2-ba3f-4570-9d96-5399d9cb9415-kube-api-access-6wkvw\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287349 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/701c19e2-ba3f-4570-9d96-5399d9cb9415-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287377 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-config\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287398 5108 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/701c19e2-ba3f-4570-9d96-5399d9cb9415-config-out\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287421 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-web-config\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287458 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701c19e2-ba3f-4570-9d96-5399d9cb9415-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287523 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.287556 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 
12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.389182 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.389255 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: E1212 14:27:13.389386 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 12 14:27:13 crc kubenswrapper[5108]: E1212 14:27:13.389468 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls podName:701c19e2-ba3f-4570-9d96-5399d9cb9415 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:13.889450835 +0000 UTC m=+986.797441984 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "701c19e2-ba3f-4570-9d96-5399d9cb9415") : secret "default-prometheus-proxy-tls" not found Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.389829 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/701c19e2-ba3f-4570-9d96-5399d9cb9415-tls-assets\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.389947 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wkvw\" (UniqueName: \"kubernetes.io/projected/701c19e2-ba3f-4570-9d96-5399d9cb9415-kube-api-access-6wkvw\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.390030 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/701c19e2-ba3f-4570-9d96-5399d9cb9415-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.390144 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-config\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.390206 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/701c19e2-ba3f-4570-9d96-5399d9cb9415-config-out\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.390261 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-web-config\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.390364 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701c19e2-ba3f-4570-9d96-5399d9cb9415-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.390457 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.391910 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701c19e2-ba3f-4570-9d96-5399d9cb9415-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.392169 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/701c19e2-ba3f-4570-9d96-5399d9cb9415-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.397003 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/701c19e2-ba3f-4570-9d96-5399d9cb9415-config-out\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.397180 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/701c19e2-ba3f-4570-9d96-5399d9cb9415-tls-assets\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0" Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.397419 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.397460 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bdd23c7734948375efe509ffad74279706b19cba6ae9733b7c96beb4dcd1ae6d/globalmount\"" pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.397498 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-config\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.397643 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-web-config\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.399494 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.424172 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wkvw\" (UniqueName: \"kubernetes.io/projected/701c19e2-ba3f-4570-9d96-5399d9cb9415-kube-api-access-6wkvw\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.428751 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c109e4c-b5be-40fc-9cc6-22c59fd5831d\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: I1212 14:27:13.898319 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:13 crc kubenswrapper[5108]: E1212 14:27:13.898556 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 12 14:27:13 crc kubenswrapper[5108]: E1212 14:27:13.898674 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls podName:701c19e2-ba3f-4570-9d96-5399d9cb9415 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:14.898648666 +0000 UTC m=+987.806639875 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "701c19e2-ba3f-4570-9d96-5399d9cb9415") : secret "default-prometheus-proxy-tls" not found
Dec 12 14:27:14 crc kubenswrapper[5108]: I1212 14:27:14.913973 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:14 crc kubenswrapper[5108]: E1212 14:27:14.914239 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 12 14:27:14 crc kubenswrapper[5108]: E1212 14:27:14.914435 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls podName:701c19e2-ba3f-4570-9d96-5399d9cb9415 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:16.91441231 +0000 UTC m=+989.822403469 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "701c19e2-ba3f-4570-9d96-5399d9cb9415") : secret "default-prometheus-proxy-tls" not found
Dec 12 14:27:16 crc kubenswrapper[5108]: I1212 14:27:16.940918 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:16 crc kubenswrapper[5108]: I1212 14:27:16.956460 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/701c19e2-ba3f-4570-9d96-5399d9cb9415-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"701c19e2-ba3f-4570-9d96-5399d9cb9415\") " pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:17 crc kubenswrapper[5108]: I1212 14:27:17.036847 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 12 14:27:17 crc kubenswrapper[5108]: I1212 14:27:17.288213 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 12 14:27:17 crc kubenswrapper[5108]: I1212 14:27:17.589675 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"701c19e2-ba3f-4570-9d96-5399d9cb9415","Type":"ContainerStarted","Data":"c5359537686f2a78243cc89a5d880306e5f0e22aae53b636f4b69565a06f016b"}
Dec 12 14:27:21 crc kubenswrapper[5108]: I1212 14:27:21.619300 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"701c19e2-ba3f-4570-9d96-5399d9cb9415","Type":"ContainerStarted","Data":"6159b592f08b462b3f1709ac8dd1b202b6bc854ca324868a99899136565dc14a"}
Dec 12 14:27:23 crc kubenswrapper[5108]: I1212 14:27:23.899986 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"]
Dec 12 14:27:23 crc kubenswrapper[5108]: I1212 14:27:23.910902 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"
Dec 12 14:27:23 crc kubenswrapper[5108]: I1212 14:27:23.911201 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"]
Dec 12 14:27:24 crc kubenswrapper[5108]: I1212 14:27:24.039266 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwxh\" (UniqueName: \"kubernetes.io/projected/e8a7d866-d128-4dc7-b7e5-b2772a340b3c-kube-api-access-9vwxh\") pod \"default-snmp-webhook-6774d8dfbc-9qtf7\" (UID: \"e8a7d866-d128-4dc7-b7e5-b2772a340b3c\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"
Dec 12 14:27:24 crc kubenswrapper[5108]: I1212 14:27:24.140426 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vwxh\" (UniqueName: \"kubernetes.io/projected/e8a7d866-d128-4dc7-b7e5-b2772a340b3c-kube-api-access-9vwxh\") pod \"default-snmp-webhook-6774d8dfbc-9qtf7\" (UID: \"e8a7d866-d128-4dc7-b7e5-b2772a340b3c\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"
Dec 12 14:27:24 crc kubenswrapper[5108]: I1212 14:27:24.162528 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vwxh\" (UniqueName: \"kubernetes.io/projected/e8a7d866-d128-4dc7-b7e5-b2772a340b3c-kube-api-access-9vwxh\") pod \"default-snmp-webhook-6774d8dfbc-9qtf7\" (UID: \"e8a7d866-d128-4dc7-b7e5-b2772a340b3c\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"
Dec 12 14:27:24 crc kubenswrapper[5108]: I1212 14:27:24.229061 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"
Dec 12 14:27:24 crc kubenswrapper[5108]: I1212 14:27:24.428801 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7"]
Dec 12 14:27:24 crc kubenswrapper[5108]: W1212 14:27:24.441510 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a7d866_d128_4dc7_b7e5_b2772a340b3c.slice/crio-b9ac3ea8c8d16f487870b76fd04e0ba78f77810a165890876d05b3752d8f9e7e WatchSource:0}: Error finding container b9ac3ea8c8d16f487870b76fd04e0ba78f77810a165890876d05b3752d8f9e7e: Status 404 returned error can't find the container with id b9ac3ea8c8d16f487870b76fd04e0ba78f77810a165890876d05b3752d8f9e7e
Dec 12 14:27:24 crc kubenswrapper[5108]: I1212 14:27:24.645797 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7" event={"ID":"e8a7d866-d128-4dc7-b7e5-b2772a340b3c","Type":"ContainerStarted","Data":"b9ac3ea8c8d16f487870b76fd04e0ba78f77810a165890876d05b3752d8f9e7e"}
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.663418 5108 generic.go:358] "Generic (PLEG): container finished" podID="701c19e2-ba3f-4570-9d96-5399d9cb9415" containerID="6159b592f08b462b3f1709ac8dd1b202b6bc854ca324868a99899136565dc14a" exitCode=0
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.663499 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"701c19e2-ba3f-4570-9d96-5399d9cb9415","Type":"ContainerDied","Data":"6159b592f08b462b3f1709ac8dd1b202b6bc854ca324868a99899136565dc14a"}
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.699775 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.721388 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.721595 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.724051 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.724348 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-v6sk7\""
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.724408 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.724512 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.724714 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.724907 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894190 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqf8v\" (UniqueName: \"kubernetes.io/projected/84895b60-9297-4ab7-a635-b35d55ff1b34-kube-api-access-gqf8v\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894255 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894378 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/84895b60-9297-4ab7-a635-b35d55ff1b34-config-out\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894422 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/84895b60-9297-4ab7-a635-b35d55ff1b34-tls-assets\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894647 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-config-volume\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894757 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894825 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.894879 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-web-config\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.996754 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-config-volume\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.996927 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.996978 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.997010 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.997064 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-web-config\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.997209 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gqf8v\" (UniqueName: \"kubernetes.io/projected/84895b60-9297-4ab7-a635-b35d55ff1b34-kube-api-access-gqf8v\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.997234 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.997294 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/84895b60-9297-4ab7-a635-b35d55ff1b34-config-out\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: I1212 14:27:27.997329 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/84895b60-9297-4ab7-a635-b35d55ff1b34-tls-assets\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 14:27:27 crc kubenswrapper[5108]: E1212 14:27:27.997113 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 12 14:27:27 crc kubenswrapper[5108]: E1212 14:27:27.998390 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls podName:84895b60-9297-4ab7-a635-b35d55ff1b34 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:28.498371949 +0000 UTC m=+1001.406363108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "84895b60-9297-4ab7-a635-b35d55ff1b34") : secret "default-alertmanager-proxy-tls" not found
Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.004665 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.004712 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-config-volume\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.004739 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.004716 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eeb5ebb7df37fe2163bf71fee03c9419b4a07252aacfcb322aa8396589bf99a2/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.005317 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/84895b60-9297-4ab7-a635-b35d55ff1b34-config-out\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.005710 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/84895b60-9297-4ab7-a635-b35d55ff1b34-tls-assets\") pod \"alertmanager-default-0\" (UID: 
\"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.018861 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.019824 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-web-config\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.028567 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqf8v\" (UniqueName: \"kubernetes.io/projected/84895b60-9297-4ab7-a635-b35d55ff1b34-kube-api-access-gqf8v\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.057673 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2d0ddfc-debc-417f-bb9d-aff2e949f329\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: I1212 14:27:28.503582 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:28 crc kubenswrapper[5108]: E1212 14:27:28.503789 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 12 14:27:28 crc kubenswrapper[5108]: E1212 14:27:28.503860 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls podName:84895b60-9297-4ab7-a635-b35d55ff1b34 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:29.503842761 +0000 UTC m=+1002.411833910 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "84895b60-9297-4ab7-a635-b35d55ff1b34") : secret "default-alertmanager-proxy-tls" not found Dec 12 14:27:29 crc kubenswrapper[5108]: I1212 14:27:29.515761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:29 crc kubenswrapper[5108]: E1212 14:27:29.516052 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 12 14:27:29 crc kubenswrapper[5108]: E1212 14:27:29.516177 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls podName:84895b60-9297-4ab7-a635-b35d55ff1b34 nodeName:}" failed. 
No retries permitted until 2025-12-12 14:27:31.516139491 +0000 UTC m=+1004.424130650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "84895b60-9297-4ab7-a635-b35d55ff1b34") : secret "default-alertmanager-proxy-tls" not found Dec 12 14:27:30 crc kubenswrapper[5108]: E1212 14:27:30.418600 5108 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 12 14:27:31 crc kubenswrapper[5108]: I1212 14:27:31.544857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:31 crc kubenswrapper[5108]: I1212 14:27:31.551985 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/84895b60-9297-4ab7-a635-b35d55ff1b34-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"84895b60-9297-4ab7-a635-b35d55ff1b34\") " pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:31 crc kubenswrapper[5108]: I1212 14:27:31.678420 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.614236 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.623071 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.641200 5108 ???:1] "http: TLS handshake error from 192.168.126.11:51972: no serving certificate available for the kubelet" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.668610 5108 ???:1] "http: TLS handshake error from 192.168.126.11:51982: no serving certificate available for the kubelet" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.744471 5108 ???:1] "http: TLS handshake error from 192.168.126.11:51990: no serving certificate available for the kubelet" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.785620 5108 ???:1] "http: TLS handshake error from 192.168.126.11:51992: no serving certificate available for the kubelet" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.849685 5108 ???:1] "http: TLS handshake error from 192.168.126.11:52004: no serving certificate available for the kubelet" Dec 12 14:27:32 crc kubenswrapper[5108]: I1212 14:27:32.951113 5108 ???:1] "http: TLS handshake error from 192.168.126.11:52006: no serving certificate available for the kubelet" Dec 12 14:27:33 crc kubenswrapper[5108]: I1212 14:27:33.134511 5108 ???:1] "http: TLS handshake error from 192.168.126.11:52010: no serving certificate available for the kubelet" Dec 12 14:27:33 crc kubenswrapper[5108]: I1212 14:27:33.479409 5108 ???:1] "http: TLS handshake error from 192.168.126.11:52018: no serving certificate available for the kubelet" Dec 12 14:27:34 crc kubenswrapper[5108]: I1212 14:27:34.143503 5108 ???:1] "http: TLS 
handshake error from 192.168.126.11:52022: no serving certificate available for the kubelet" Dec 12 14:27:35 crc kubenswrapper[5108]: I1212 14:27:35.447150 5108 ???:1] "http: TLS handshake error from 192.168.126.11:45332: no serving certificate available for the kubelet" Dec 12 14:27:35 crc kubenswrapper[5108]: I1212 14:27:35.535367 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 12 14:27:37 crc kubenswrapper[5108]: W1212 14:27:37.089500 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84895b60_9297_4ab7_a635_b35d55ff1b34.slice/crio-8b30a6ff17f20a929f68830dc533e16cb5e4ad0f02d8f1d67ee899ffd37837d5 WatchSource:0}: Error finding container 8b30a6ff17f20a929f68830dc533e16cb5e4ad0f02d8f1d67ee899ffd37837d5: Status 404 returned error can't find the container with id 8b30a6ff17f20a929f68830dc533e16cb5e4ad0f02d8f1d67ee899ffd37837d5 Dec 12 14:27:37 crc kubenswrapper[5108]: I1212 14:27:37.737878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"84895b60-9297-4ab7-a635-b35d55ff1b34","Type":"ContainerStarted","Data":"8b30a6ff17f20a929f68830dc533e16cb5e4ad0f02d8f1d67ee899ffd37837d5"} Dec 12 14:27:37 crc kubenswrapper[5108]: I1212 14:27:37.739224 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7" event={"ID":"e8a7d866-d128-4dc7-b7e5-b2772a340b3c","Type":"ContainerStarted","Data":"ff773762bad458259a90c1c93c7f446807ea5d3b70f039d04125ed5dcbb19147"} Dec 12 14:27:37 crc kubenswrapper[5108]: I1212 14:27:37.766145 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9qtf7" podStartSLOduration=3.972899584 podStartE2EDuration="14.766062209s" podCreationTimestamp="2025-12-12 14:27:23 +0000 UTC" firstStartedPulling="2025-12-12 14:27:24.445296089 
+0000 UTC m=+997.353287248" lastFinishedPulling="2025-12-12 14:27:35.238458714 +0000 UTC m=+1008.146449873" observedRunningTime="2025-12-12 14:27:37.761105335 +0000 UTC m=+1010.669096494" watchObservedRunningTime="2025-12-12 14:27:37.766062209 +0000 UTC m=+1010.674053378" Dec 12 14:27:38 crc kubenswrapper[5108]: I1212 14:27:38.028516 5108 ???:1] "http: TLS handshake error from 192.168.126.11:45338: no serving certificate available for the kubelet" Dec 12 14:27:42 crc kubenswrapper[5108]: I1212 14:27:42.868652 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"701c19e2-ba3f-4570-9d96-5399d9cb9415","Type":"ContainerStarted","Data":"1d9692208fbf1d493e33683b95913e1ffdfb2980bea297a340e925e837e9b0f9"} Dec 12 14:27:43 crc kubenswrapper[5108]: I1212 14:27:43.175567 5108 ???:1] "http: TLS handshake error from 192.168.126.11:45354: no serving certificate available for the kubelet" Dec 12 14:27:43 crc kubenswrapper[5108]: I1212 14:27:43.895994 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"84895b60-9297-4ab7-a635-b35d55ff1b34","Type":"ContainerStarted","Data":"980f93a7ba6854d31473f01ea42dc8d326763081eb6cf0e33722b146bbc2ffd3"} Dec 12 14:27:44 crc kubenswrapper[5108]: I1212 14:27:44.915620 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"701c19e2-ba3f-4570-9d96-5399d9cb9415","Type":"ContainerStarted","Data":"0f68cb3acfebb011b1e2b185ff0f53a2de9c69ef41f103659b9b01bc0a2fc347"} Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.054285 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr"] Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.070237 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr"] Dec 12 14:27:45 crc 
kubenswrapper[5108]: I1212 14:27:45.070460 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.072499 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.072714 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-fkn44\"" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.072807 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.072900 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.137933 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.138265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg625\" (UniqueName: \"kubernetes.io/projected/645b4830-437b-47e6-bafc-73f709b51ece-kube-api-access-mg625\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc 
kubenswrapper[5108]: I1212 14:27:45.138466 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.138656 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/645b4830-437b-47e6-bafc-73f709b51ece-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.138707 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/645b4830-437b-47e6-bafc-73f709b51ece-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.240663 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.240747 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/645b4830-437b-47e6-bafc-73f709b51ece-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.240782 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/645b4830-437b-47e6-bafc-73f709b51ece-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.240871 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.240920 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mg625\" (UniqueName: \"kubernetes.io/projected/645b4830-437b-47e6-bafc-73f709b51ece-kube-api-access-mg625\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: E1212 14:27:45.241126 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 14:27:45 crc kubenswrapper[5108]: E1212 
14:27:45.241306 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls podName:645b4830-437b-47e6-bafc-73f709b51ece nodeName:}" failed. No retries permitted until 2025-12-12 14:27:45.741268 +0000 UTC m=+1018.649259159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-l54mr" (UID: "645b4830-437b-47e6-bafc-73f709b51ece") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.241587 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/645b4830-437b-47e6-bafc-73f709b51ece-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.246693 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/645b4830-437b-47e6-bafc-73f709b51ece-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.249698 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.270534 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg625\" (UniqueName: \"kubernetes.io/projected/645b4830-437b-47e6-bafc-73f709b51ece-kube-api-access-mg625\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: I1212 14:27:45.747582 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:45 crc kubenswrapper[5108]: E1212 14:27:45.747759 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 14:27:45 crc kubenswrapper[5108]: E1212 14:27:45.747847 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls podName:645b4830-437b-47e6-bafc-73f709b51ece nodeName:}" failed. No retries permitted until 2025-12-12 14:27:46.747825101 +0000 UTC m=+1019.655816260 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-l54mr" (UID: "645b4830-437b-47e6-bafc-73f709b51ece") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 14:27:46 crc kubenswrapper[5108]: I1212 14:27:46.778564 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:46 crc kubenswrapper[5108]: I1212 14:27:46.784285 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/645b4830-437b-47e6-bafc-73f709b51ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-l54mr\" (UID: \"645b4830-437b-47e6-bafc-73f709b51ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:46 crc kubenswrapper[5108]: I1212 14:27:46.895599 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" Dec 12 14:27:47 crc kubenswrapper[5108]: I1212 14:27:47.679596 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr"] Dec 12 14:27:47 crc kubenswrapper[5108]: W1212 14:27:47.686588 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod645b4830_437b_47e6_bafc_73f709b51ece.slice/crio-0de29c17a05b338056bb23f896a83c345089864ac425b01fa2bc74836778a4d4 WatchSource:0}: Error finding container 0de29c17a05b338056bb23f896a83c345089864ac425b01fa2bc74836778a4d4: Status 404 returned error can't find the container with id 0de29c17a05b338056bb23f896a83c345089864ac425b01fa2bc74836778a4d4 Dec 12 14:27:47 crc kubenswrapper[5108]: I1212 14:27:47.946910 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerStarted","Data":"0de29c17a05b338056bb23f896a83c345089864ac425b01fa2bc74836778a4d4"} Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.743017 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2"] Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.881485 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2"] Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.881640 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.883372 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.883634 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.909117 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blkbx\" (UniqueName: \"kubernetes.io/projected/d9667806-a274-49a5-a527-cd0b9c72cd19-kube-api-access-blkbx\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.909187 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9667806-a274-49a5-a527-cd0b9c72cd19-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.909215 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d9667806-a274-49a5-a527-cd0b9c72cd19-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 
14:27:49.909244 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:49 crc kubenswrapper[5108]: I1212 14:27:49.909260 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.010020 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d9667806-a274-49a5-a527-cd0b9c72cd19-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.010089 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.010112 5108 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.010178 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-blkbx\" (UniqueName: \"kubernetes.io/projected/d9667806-a274-49a5-a527-cd0b9c72cd19-kube-api-access-blkbx\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.010225 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9667806-a274-49a5-a527-cd0b9c72cd19-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.010613 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9667806-a274-49a5-a527-cd0b9c72cd19-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.011317 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d9667806-a274-49a5-a527-cd0b9c72cd19-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: 
\"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: E1212 14:27:50.011390 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 14:27:50 crc kubenswrapper[5108]: E1212 14:27:50.011468 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls podName:d9667806-a274-49a5-a527-cd0b9c72cd19 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:50.511454177 +0000 UTC m=+1023.419445336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" (UID: "d9667806-a274-49a5-a527-cd0b9c72cd19") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.017116 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.030285 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-blkbx\" (UniqueName: \"kubernetes.io/projected/d9667806-a274-49a5-a527-cd0b9c72cd19-kube-api-access-blkbx\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 
14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.516148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:50 crc kubenswrapper[5108]: E1212 14:27:50.516387 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 14:27:50 crc kubenswrapper[5108]: E1212 14:27:50.516501 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls podName:d9667806-a274-49a5-a527-cd0b9c72cd19 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:51.516476306 +0000 UTC m=+1024.424467485 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" (UID: "d9667806-a274-49a5-a527-cd0b9c72cd19") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.970008 5108 generic.go:358] "Generic (PLEG): container finished" podID="84895b60-9297-4ab7-a635-b35d55ff1b34" containerID="980f93a7ba6854d31473f01ea42dc8d326763081eb6cf0e33722b146bbc2ffd3" exitCode=0 Dec 12 14:27:50 crc kubenswrapper[5108]: I1212 14:27:50.970100 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"84895b60-9297-4ab7-a635-b35d55ff1b34","Type":"ContainerDied","Data":"980f93a7ba6854d31473f01ea42dc8d326763081eb6cf0e33722b146bbc2ffd3"} Dec 12 14:27:51 crc kubenswrapper[5108]: I1212 14:27:51.530595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:51 crc kubenswrapper[5108]: I1212 14:27:51.556870 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9667806-a274-49a5-a527-cd0b9c72cd19-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2\" (UID: \"d9667806-a274-49a5-a527-cd0b9c72cd19\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:51 crc kubenswrapper[5108]: I1212 14:27:51.728552 5108 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.447726 5108 ???:1] "http: TLS handshake error from 192.168.126.11:50826: no serving certificate available for the kubelet" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.618283 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb"] Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.627022 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.641982 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.642552 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.648381 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb"] Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.743915 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.743998 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.744028 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.744089 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb8c\" (UniqueName: \"kubernetes.io/projected/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-kube-api-access-8cb8c\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.744135 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.849171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb8c\" (UniqueName: \"kubernetes.io/projected/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-kube-api-access-8cb8c\") 
pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.849365 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.850227 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.850460 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.850526 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: E1212 14:27:53.850607 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 14:27:53 crc kubenswrapper[5108]: E1212 14:27:53.850682 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls podName:37df5dc8-0b91-4cf7-91ba-d2c90b9cce16 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:54.350659779 +0000 UTC m=+1027.258650938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" (UID: "37df5dc8-0b91-4cf7-91ba-d2c90b9cce16") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.851706 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.852052 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.861126 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:53 crc kubenswrapper[5108]: I1212 14:27:53.878173 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb8c\" (UniqueName: \"kubernetes.io/projected/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-kube-api-access-8cb8c\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:54 crc kubenswrapper[5108]: I1212 14:27:54.370817 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:54 crc kubenswrapper[5108]: E1212 14:27:54.371049 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 14:27:54 crc kubenswrapper[5108]: E1212 14:27:54.371373 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls podName:37df5dc8-0b91-4cf7-91ba-d2c90b9cce16 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:55.371350352 +0000 UTC m=+1028.279341501 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" (UID: "37df5dc8-0b91-4cf7-91ba-d2c90b9cce16") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 14:27:55 crc kubenswrapper[5108]: I1212 14:27:55.394507 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:55 crc kubenswrapper[5108]: E1212 14:27:55.395044 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 14:27:55 crc kubenswrapper[5108]: E1212 14:27:55.395170 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls podName:37df5dc8-0b91-4cf7-91ba-d2c90b9cce16 nodeName:}" failed. No retries permitted until 2025-12-12 14:27:57.395147792 +0000 UTC m=+1030.303138951 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" (UID: "37df5dc8-0b91-4cf7-91ba-d2c90b9cce16") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 14:27:55 crc kubenswrapper[5108]: I1212 14:27:55.731479 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2"] Dec 12 14:27:56 crc kubenswrapper[5108]: I1212 14:27:56.009585 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerStarted","Data":"058ac399c82496203d2a743e75df20bbca9a532de11c442efcd1c4bdf72cbde0"} Dec 12 14:27:56 crc kubenswrapper[5108]: I1212 14:27:56.013281 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"701c19e2-ba3f-4570-9d96-5399d9cb9415","Type":"ContainerStarted","Data":"d315816e1f4bfc42bde412cd71cddcc37aaf9bc909cf48202775cb710c5fbc5a"} Dec 12 14:27:56 crc kubenswrapper[5108]: I1212 14:27:56.015094 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerStarted","Data":"6fdc19f719ee9b9dfe1d100ecb392cb46614f60feec32315426bc6624e4bdc62"} Dec 12 14:27:56 crc kubenswrapper[5108]: I1212 14:27:56.040806 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=6.205260714 podStartE2EDuration="44.040784986s" podCreationTimestamp="2025-12-12 14:27:12 +0000 UTC" firstStartedPulling="2025-12-12 14:27:17.323464526 +0000 UTC m=+990.231455685" lastFinishedPulling="2025-12-12 
14:27:55.158988798 +0000 UTC m=+1028.066979957" observedRunningTime="2025-12-12 14:27:56.034918238 +0000 UTC m=+1028.942909407" watchObservedRunningTime="2025-12-12 14:27:56.040784986 +0000 UTC m=+1028.948776145" Dec 12 14:27:57 crc kubenswrapper[5108]: I1212 14:27:57.037325 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Dec 12 14:27:57 crc kubenswrapper[5108]: I1212 14:27:57.440500 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:57 crc kubenswrapper[5108]: I1212 14:27:57.449831 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/37df5dc8-0b91-4cf7-91ba-d2c90b9cce16-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb\" (UID: \"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:57 crc kubenswrapper[5108]: I1212 14:27:57.565172 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" Dec 12 14:27:58 crc kubenswrapper[5108]: I1212 14:27:58.263133 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb"] Dec 12 14:27:58 crc kubenswrapper[5108]: W1212 14:27:58.282155 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37df5dc8_0b91_4cf7_91ba_d2c90b9cce16.slice/crio-6c8bc0167be0e0922b8b6be953987856313c977703768e831d22531b985fbc4d WatchSource:0}: Error finding container 6c8bc0167be0e0922b8b6be953987856313c977703768e831d22531b985fbc4d: Status 404 returned error can't find the container with id 6c8bc0167be0e0922b8b6be953987856313c977703768e831d22531b985fbc4d Dec 12 14:27:59 crc kubenswrapper[5108]: I1212 14:27:59.039976 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerStarted","Data":"f55c749e3f89b0bc073d1494dcd6eb37437bef1602a55f15d286626809680903"} Dec 12 14:27:59 crc kubenswrapper[5108]: I1212 14:27:59.040864 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerStarted","Data":"6c8bc0167be0e0922b8b6be953987856313c977703768e831d22531b985fbc4d"} Dec 12 14:27:59 crc kubenswrapper[5108]: I1212 14:27:59.042522 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"84895b60-9297-4ab7-a635-b35d55ff1b34","Type":"ContainerStarted","Data":"51c87f16d65566c9a8630d56818559b12aedf24dc5034e6cea6e7813c37037e3"} Dec 12 14:28:00 crc kubenswrapper[5108]: I1212 14:28:00.055875 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerStarted","Data":"bd687418a8b4d3d23c036785d360b1aa5cd9e7775d74e233cbc07ba4f9fb6f43"} Dec 12 14:28:01 crc kubenswrapper[5108]: I1212 14:28:01.148335 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"84895b60-9297-4ab7-a635-b35d55ff1b34","Type":"ContainerStarted","Data":"a4a3b16242ec1924babf9368e182d4b2adc31c3ed19531f1cbd957b6437f1f11"} Dec 12 14:28:02 crc kubenswrapper[5108]: I1212 14:28:02.039401 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 12 14:28:02 crc kubenswrapper[5108]: I1212 14:28:02.096213 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 12 14:28:02 crc kubenswrapper[5108]: I1212 14:28:02.192688 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 12 14:28:02 crc kubenswrapper[5108]: I1212 14:28:02.331257 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw"] Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.619966 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw"] Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.620153 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.624415 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.624908 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.630197 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwz7\" (UniqueName: \"kubernetes.io/projected/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-kube-api-access-gpwz7\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.630302 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.630530 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.630624 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.731626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwz7\" (UniqueName: \"kubernetes.io/projected/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-kube-api-access-gpwz7\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.733015 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.734956 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.735116 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: 
\"kubernetes.io/secret/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.735158 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.735482 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.744462 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.760752 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwz7\" (UniqueName: \"kubernetes.io/projected/2d417cbc-3913-4f81-91fb-3cac9ac3cf33-kube-api-access-gpwz7\") pod \"default-cloud1-coll-event-smartgateway-845b488456-nh7vw\" (UID: \"2d417cbc-3913-4f81-91fb-3cac9ac3cf33\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:03 crc kubenswrapper[5108]: I1212 14:28:03.946191 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.090742 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp"] Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.601783 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp"] Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.602163 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.606192 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.647656 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdf5h\" (UniqueName: \"kubernetes.io/projected/34f718c9-3742-4c20-9212-66901e612e32-kube-api-access-sdf5h\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.647951 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/34f718c9-3742-4c20-9212-66901e612e32-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.648016 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/34f718c9-3742-4c20-9212-66901e612e32-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.648067 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/34f718c9-3742-4c20-9212-66901e612e32-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.749053 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sdf5h\" (UniqueName: \"kubernetes.io/projected/34f718c9-3742-4c20-9212-66901e612e32-kube-api-access-sdf5h\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.749171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/34f718c9-3742-4c20-9212-66901e612e32-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.749201 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/34f718c9-3742-4c20-9212-66901e612e32-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.749230 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/34f718c9-3742-4c20-9212-66901e612e32-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.749760 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/34f718c9-3742-4c20-9212-66901e612e32-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.750592 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/34f718c9-3742-4c20-9212-66901e612e32-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.760030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/34f718c9-3742-4c20-9212-66901e612e32-elastic-certs\") pod 
\"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.773914 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdf5h\" (UniqueName: \"kubernetes.io/projected/34f718c9-3742-4c20-9212-66901e612e32-kube-api-access-sdf5h\") pod \"default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp\" (UID: \"34f718c9-3742-4c20-9212-66901e612e32\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:04 crc kubenswrapper[5108]: I1212 14:28:04.933363 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" Dec 12 14:28:06 crc kubenswrapper[5108]: I1212 14:28:06.962135 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp"] Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 14:28:07.128566 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw"] Dec 12 14:28:07 crc kubenswrapper[5108]: W1212 14:28:07.137393 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d417cbc_3913_4f81_91fb_3cac9ac3cf33.slice/crio-856fd76ffe45db147fa7d39c3f22cafbdc4ddbd7803e3fb1adb6f236240f6ad3 WatchSource:0}: Error finding container 856fd76ffe45db147fa7d39c3f22cafbdc4ddbd7803e3fb1adb6f236240f6ad3: Status 404 returned error can't find the container with id 856fd76ffe45db147fa7d39c3f22cafbdc4ddbd7803e3fb1adb6f236240f6ad3 Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 14:28:07.200129 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerStarted","Data":"1d7706ea9a331d9fa952bf514c27f34b402ff82d11e9161ae27a7093b20d00bc"} Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 14:28:07.202741 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerStarted","Data":"330e3b5cca3777e4ef0aea5d4ac89a0de83a259498b444a69223645eae0fc7a7"} Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 14:28:07.204695 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerStarted","Data":"1e5d58ef1bcf19a1074e9cae27aa56686b7c6a04c556827e1bfb980ded049a6a"} Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 14:28:07.207105 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"84895b60-9297-4ab7-a635-b35d55ff1b34","Type":"ContainerStarted","Data":"46fa9dbb88c2fd975902f969216a87df8417b0116f61818b68f758845a2fcd95"} Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 14:28:07.208614 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" event={"ID":"2d417cbc-3913-4f81-91fb-3cac9ac3cf33","Type":"ContainerStarted","Data":"856fd76ffe45db147fa7d39c3f22cafbdc4ddbd7803e3fb1adb6f236240f6ad3"} Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 14:28:07.209842 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" event={"ID":"34f718c9-3742-4c20-9212-66901e612e32","Type":"ContainerStarted","Data":"386f8a60ebc407e0f88af2125f58f1172174fab41ea9956a57b1bde3aeb6e10a"} Dec 12 14:28:07 crc kubenswrapper[5108]: I1212 
14:28:07.234307 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=25.607589722 podStartE2EDuration="41.234288105s" podCreationTimestamp="2025-12-12 14:27:26 +0000 UTC" firstStartedPulling="2025-12-12 14:27:50.971322441 +0000 UTC m=+1023.879313600" lastFinishedPulling="2025-12-12 14:28:06.598020814 +0000 UTC m=+1039.506011983" observedRunningTime="2025-12-12 14:28:07.232207859 +0000 UTC m=+1040.140199048" watchObservedRunningTime="2025-12-12 14:28:07.234288105 +0000 UTC m=+1040.142279264" Dec 12 14:28:08 crc kubenswrapper[5108]: I1212 14:28:08.221710 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" event={"ID":"2d417cbc-3913-4f81-91fb-3cac9ac3cf33","Type":"ContainerStarted","Data":"aec2d74e7b8a70d86da2a1fc169f76df9ba8c31b37d056653e2a0e53525c6960"} Dec 12 14:28:08 crc kubenswrapper[5108]: I1212 14:28:08.225813 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" event={"ID":"34f718c9-3742-4c20-9212-66901e612e32","Type":"ContainerStarted","Data":"cfcd39888ed36ac77e5ad17299c7cc7825e0d02f4829a8933a643419ee28020f"} Dec 12 14:28:13 crc kubenswrapper[5108]: I1212 14:28:13.960112 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42714: no serving certificate available for the kubelet" Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.334887 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerStarted","Data":"fcbe8a4956065ad3ccbca50c34928478c9ee69e396720c8e8fc48b7c2dc0036f"} Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.337069 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" event={"ID":"2d417cbc-3913-4f81-91fb-3cac9ac3cf33","Type":"ContainerStarted","Data":"35b2ccfe4bf42b1a7e901b3dd1a17738b2425591a3efa48fb6ddff85f2762dec"} Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.339278 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" event={"ID":"34f718c9-3742-4c20-9212-66901e612e32","Type":"ContainerStarted","Data":"7a3f5541cf129b673a1694d7641b7f7fd5f999bfdb052cb14a0f457996c6a924"} Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.342435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerStarted","Data":"7ef01e7fe6fb9870880e9fa33433b5d639732f212f42e179414ab7df2760af28"} Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.344399 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerStarted","Data":"a2414c9955537c732a8a5a8fe11755cc3156f8d1b04fa1258dd69b26c18eb229"} Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.368535 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" podStartSLOduration=5.709569991 podStartE2EDuration="23.368516639s" podCreationTimestamp="2025-12-12 14:27:53 +0000 UTC" firstStartedPulling="2025-12-12 14:27:58.283263526 +0000 UTC m=+1031.191254685" lastFinishedPulling="2025-12-12 14:28:15.942210174 +0000 UTC m=+1048.850201333" observedRunningTime="2025-12-12 14:28:16.350996086 +0000 UTC m=+1049.258987255" watchObservedRunningTime="2025-12-12 14:28:16.368516639 +0000 UTC m=+1049.276507798" Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 
14:28:16.388530 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" podStartSLOduration=3.481356002 podStartE2EDuration="12.388506638s" podCreationTimestamp="2025-12-12 14:28:04 +0000 UTC" firstStartedPulling="2025-12-12 14:28:06.971839442 +0000 UTC m=+1039.879830601" lastFinishedPulling="2025-12-12 14:28:15.878990078 +0000 UTC m=+1048.786981237" observedRunningTime="2025-12-12 14:28:16.387577704 +0000 UTC m=+1049.295568863" watchObservedRunningTime="2025-12-12 14:28:16.388506638 +0000 UTC m=+1049.296497797" Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.403615 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" podStartSLOduration=5.58276303 podStartE2EDuration="14.403594145s" podCreationTimestamp="2025-12-12 14:28:02 +0000 UTC" firstStartedPulling="2025-12-12 14:28:07.13923946 +0000 UTC m=+1040.047230619" lastFinishedPulling="2025-12-12 14:28:15.960070575 +0000 UTC m=+1048.868061734" observedRunningTime="2025-12-12 14:28:16.402433184 +0000 UTC m=+1049.310424353" watchObservedRunningTime="2025-12-12 14:28:16.403594145 +0000 UTC m=+1049.311585304" Dec 12 14:28:16 crc kubenswrapper[5108]: I1212 14:28:16.425354 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" podStartSLOduration=7.185317577 podStartE2EDuration="27.425333492s" podCreationTimestamp="2025-12-12 14:27:49 +0000 UTC" firstStartedPulling="2025-12-12 14:27:55.750572734 +0000 UTC m=+1028.658563893" lastFinishedPulling="2025-12-12 14:28:15.990588649 +0000 UTC m=+1048.898579808" observedRunningTime="2025-12-12 14:28:16.420711037 +0000 UTC m=+1049.328702206" watchObservedRunningTime="2025-12-12 14:28:16.425333492 +0000 UTC m=+1049.333324651" Dec 12 14:28:19 crc kubenswrapper[5108]: I1212 14:28:19.280116 
5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" podStartSLOduration=6.112339877 podStartE2EDuration="34.280065416s" podCreationTimestamp="2025-12-12 14:27:45 +0000 UTC" firstStartedPulling="2025-12-12 14:27:47.689321977 +0000 UTC m=+1020.597313136" lastFinishedPulling="2025-12-12 14:28:15.857047516 +0000 UTC m=+1048.765038675" observedRunningTime="2025-12-12 14:28:16.442739982 +0000 UTC m=+1049.350731171" watchObservedRunningTime="2025-12-12 14:28:19.280065416 +0000 UTC m=+1052.188056575" Dec 12 14:28:19 crc kubenswrapper[5108]: I1212 14:28:19.286198 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wlh9t"] Dec 12 14:28:19 crc kubenswrapper[5108]: I1212 14:28:19.286611 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t" podUID="a5304ef7-fb2d-463b-8f63-b765183d5e64" containerName="default-interconnect" containerID="cri-o://79a578d9c071bea253a1aaae51da1e0e557dcc901fd0374acde6fcd99e6724de" gracePeriod=30 Dec 12 14:28:20 crc kubenswrapper[5108]: E1212 14:28:20.029152 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod645b4830_437b_47e6_bafc_73f709b51ece.slice/crio-1d7706ea9a331d9fa952bf514c27f34b402ff82d11e9161ae27a7093b20d00bc.scope\": RecentStats: unable to find data in memory cache]" Dec 12 14:28:20 crc kubenswrapper[5108]: I1212 14:28:20.374229 5108 generic.go:358] "Generic (PLEG): container finished" podID="a5304ef7-fb2d-463b-8f63-b765183d5e64" containerID="79a578d9c071bea253a1aaae51da1e0e557dcc901fd0374acde6fcd99e6724de" exitCode=0 Dec 12 14:28:20 crc kubenswrapper[5108]: I1212 14:28:20.374336 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t" event={"ID":"a5304ef7-fb2d-463b-8f63-b765183d5e64","Type":"ContainerDied","Data":"79a578d9c071bea253a1aaae51da1e0e557dcc901fd0374acde6fcd99e6724de"} Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.382809 5108 generic.go:358] "Generic (PLEG): container finished" podID="37df5dc8-0b91-4cf7-91ba-d2c90b9cce16" containerID="1e5d58ef1bcf19a1074e9cae27aa56686b7c6a04c556827e1bfb980ded049a6a" exitCode=0 Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.382897 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerDied","Data":"1e5d58ef1bcf19a1074e9cae27aa56686b7c6a04c556827e1bfb980ded049a6a"} Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.383547 5108 scope.go:117] "RemoveContainer" containerID="1e5d58ef1bcf19a1074e9cae27aa56686b7c6a04c556827e1bfb980ded049a6a" Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.386111 5108 generic.go:358] "Generic (PLEG): container finished" podID="2d417cbc-3913-4f81-91fb-3cac9ac3cf33" containerID="aec2d74e7b8a70d86da2a1fc169f76df9ba8c31b37d056653e2a0e53525c6960" exitCode=0 Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.386178 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" event={"ID":"2d417cbc-3913-4f81-91fb-3cac9ac3cf33","Type":"ContainerDied","Data":"aec2d74e7b8a70d86da2a1fc169f76df9ba8c31b37d056653e2a0e53525c6960"} Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.386672 5108 scope.go:117] "RemoveContainer" containerID="aec2d74e7b8a70d86da2a1fc169f76df9ba8c31b37d056653e2a0e53525c6960" Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.391395 5108 generic.go:358] "Generic (PLEG): container finished" podID="34f718c9-3742-4c20-9212-66901e612e32" 
containerID="cfcd39888ed36ac77e5ad17299c7cc7825e0d02f4829a8933a643419ee28020f" exitCode=0 Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.391610 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" event={"ID":"34f718c9-3742-4c20-9212-66901e612e32","Type":"ContainerDied","Data":"cfcd39888ed36ac77e5ad17299c7cc7825e0d02f4829a8933a643419ee28020f"} Dec 12 14:28:21 crc kubenswrapper[5108]: I1212 14:28:21.392015 5108 scope.go:117] "RemoveContainer" containerID="cfcd39888ed36ac77e5ad17299c7cc7825e0d02f4829a8933a643419ee28020f" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.403521 5108 generic.go:358] "Generic (PLEG): container finished" podID="d9667806-a274-49a5-a527-cd0b9c72cd19" containerID="330e3b5cca3777e4ef0aea5d4ac89a0de83a259498b444a69223645eae0fc7a7" exitCode=0 Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.403727 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerDied","Data":"330e3b5cca3777e4ef0aea5d4ac89a0de83a259498b444a69223645eae0fc7a7"} Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.404743 5108 scope.go:117] "RemoveContainer" containerID="330e3b5cca3777e4ef0aea5d4ac89a0de83a259498b444a69223645eae0fc7a7" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.407467 5108 generic.go:358] "Generic (PLEG): container finished" podID="645b4830-437b-47e6-bafc-73f709b51ece" containerID="1d7706ea9a331d9fa952bf514c27f34b402ff82d11e9161ae27a7093b20d00bc" exitCode=0 Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.407622 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" 
event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerDied","Data":"1d7706ea9a331d9fa952bf514c27f34b402ff82d11e9161ae27a7093b20d00bc"} Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.408580 5108 scope.go:117] "RemoveContainer" containerID="1d7706ea9a331d9fa952bf514c27f34b402ff82d11e9161ae27a7093b20d00bc" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.737118 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.783520 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-ca\") pod \"a5304ef7-fb2d-463b-8f63-b765183d5e64\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.783599 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-credentials\") pod \"a5304ef7-fb2d-463b-8f63-b765183d5e64\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.783678 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-users\") pod \"a5304ef7-fb2d-463b-8f63-b765183d5e64\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.783707 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlzp7\" (UniqueName: \"kubernetes.io/projected/a5304ef7-fb2d-463b-8f63-b765183d5e64-kube-api-access-qlzp7\") pod \"a5304ef7-fb2d-463b-8f63-b765183d5e64\" 
(UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.783766 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-ca\") pod \"a5304ef7-fb2d-463b-8f63-b765183d5e64\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.783898 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-credentials\") pod \"a5304ef7-fb2d-463b-8f63-b765183d5e64\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.783928 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-config\") pod \"a5304ef7-fb2d-463b-8f63-b765183d5e64\" (UID: \"a5304ef7-fb2d-463b-8f63-b765183d5e64\") " Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.785307 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "a5304ef7-fb2d-463b-8f63-b765183d5e64" (UID: "a5304ef7-fb2d-463b-8f63-b765183d5e64"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.793778 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "a5304ef7-fb2d-463b-8f63-b765183d5e64" (UID: "a5304ef7-fb2d-463b-8f63-b765183d5e64"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.793902 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "a5304ef7-fb2d-463b-8f63-b765183d5e64" (UID: "a5304ef7-fb2d-463b-8f63-b765183d5e64"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.794457 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "a5304ef7-fb2d-463b-8f63-b765183d5e64" (UID: "a5304ef7-fb2d-463b-8f63-b765183d5e64"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.799029 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5304ef7-fb2d-463b-8f63-b765183d5e64-kube-api-access-qlzp7" (OuterVolumeSpecName: "kube-api-access-qlzp7") pod "a5304ef7-fb2d-463b-8f63-b765183d5e64" (UID: "a5304ef7-fb2d-463b-8f63-b765183d5e64"). InnerVolumeSpecName "kube-api-access-qlzp7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.799723 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "a5304ef7-fb2d-463b-8f63-b765183d5e64" (UID: "a5304ef7-fb2d-463b-8f63-b765183d5e64"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.801805 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "a5304ef7-fb2d-463b-8f63-b765183d5e64" (UID: "a5304ef7-fb2d-463b-8f63-b765183d5e64"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.820528 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-ll9w8"] Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.821952 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5304ef7-fb2d-463b-8f63-b765183d5e64" containerName="default-interconnect" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.821981 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5304ef7-fb2d-463b-8f63-b765183d5e64" containerName="default-interconnect" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.822138 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="a5304ef7-fb2d-463b-8f63-b765183d5e64" containerName="default-interconnect" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.846861 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-ll9w8"] Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.847093 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.885953 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7bbd5d5c-f473-452c-95ec-0fc513b19235-sasl-config\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886065 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886126 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886156 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d65gj\" (UniqueName: \"kubernetes.io/projected/7bbd5d5c-f473-452c-95ec-0fc513b19235-kube-api-access-d65gj\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886182 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886204 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886245 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-sasl-users\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886303 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886315 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 
14:28:22.886327 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-users\") on node \"crc\" DevicePath \"\""
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886336 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qlzp7\" (UniqueName: \"kubernetes.io/projected/a5304ef7-fb2d-463b-8f63-b765183d5e64-kube-api-access-qlzp7\") on node \"crc\" DevicePath \"\""
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886345 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886353 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a5304ef7-fb2d-463b-8f63-b765183d5e64-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.886362 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a5304ef7-fb2d-463b-8f63-b765183d5e64-sasl-config\") on node \"crc\" DevicePath \"\""
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.987514 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.987558 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.987584 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d65gj\" (UniqueName: \"kubernetes.io/projected/7bbd5d5c-f473-452c-95ec-0fc513b19235-kube-api-access-d65gj\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.987608 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.987626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.987656 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-sasl-users\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.987690 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7bbd5d5c-f473-452c-95ec-0fc513b19235-sasl-config\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.988691 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7bbd5d5c-f473-452c-95ec-0fc513b19235-sasl-config\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.996669 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.996729 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:22 crc kubenswrapper[5108]: I1212 14:28:22.999410 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.000554 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-sasl-users\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.004258 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7bbd5d5c-f473-452c-95ec-0fc513b19235-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.007906 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d65gj\" (UniqueName: \"kubernetes.io/projected/7bbd5d5c-f473-452c-95ec-0fc513b19235-kube-api-access-d65gj\") pod \"default-interconnect-55bf8d5cb-ll9w8\" (UID: \"7bbd5d5c-f473-452c-95ec-0fc513b19235\") " pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.197273 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8"
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.417805 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerStarted","Data":"4ef62e3790611ac24442a5f588e7ca2b5ba08a44df2403c7574a3d0c37d0d300"}
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.423180 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerStarted","Data":"76323442fd312e9b4bf6bae1343964c08b578c90fec688a467a2647278b4e81c"}
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.426818 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t"
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.427051 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-wlh9t" event={"ID":"a5304ef7-fb2d-463b-8f63-b765183d5e64","Type":"ContainerDied","Data":"10010c5bbea5fc2920edc8bfe9a6f36c7ccd622aa589070875a661f7f14c5f6b"}
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.427245 5108 scope.go:117] "RemoveContainer" containerID="79a578d9c071bea253a1aaae51da1e0e557dcc901fd0374acde6fcd99e6724de"
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.454221 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" event={"ID":"2d417cbc-3913-4f81-91fb-3cac9ac3cf33","Type":"ContainerStarted","Data":"a9538541e7e7e3e7a0dacc47fcf1d4e197ce1b6d15d916add6227450da4e25c9"}
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.477373 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" event={"ID":"34f718c9-3742-4c20-9212-66901e612e32","Type":"ContainerStarted","Data":"19e7eb94558705abf217b6ebb531ba83e088378059a9431e25689975288501cd"}
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.480552 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wlh9t"]
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.487755 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wlh9t"]
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.507058 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerStarted","Data":"9ba942dc005b3d28768ab1548e9b823e4f8548251b5ed68e8211c2d8447d94bb"}
Dec 12 14:28:23 crc kubenswrapper[5108]: I1212 14:28:23.861153 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-ll9w8"]
Dec 12 14:28:23 crc kubenswrapper[5108]: W1212 14:28:23.873868 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbd5d5c_f473_452c_95ec_0fc513b19235.slice/crio-b5bc16206261022042c2e166f6500530c661b1effaad8518d5dc8938f566dc9d WatchSource:0}: Error finding container b5bc16206261022042c2e166f6500530c661b1effaad8518d5dc8938f566dc9d: Status 404 returned error can't find the container with id b5bc16206261022042c2e166f6500530c661b1effaad8518d5dc8938f566dc9d
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.514906 5108 generic.go:358] "Generic (PLEG): container finished" podID="d9667806-a274-49a5-a527-cd0b9c72cd19" containerID="4ef62e3790611ac24442a5f588e7ca2b5ba08a44df2403c7574a3d0c37d0d300" exitCode=0
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.515003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerDied","Data":"4ef62e3790611ac24442a5f588e7ca2b5ba08a44df2403c7574a3d0c37d0d300"}
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.515310 5108 scope.go:117] "RemoveContainer" containerID="330e3b5cca3777e4ef0aea5d4ac89a0de83a259498b444a69223645eae0fc7a7"
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.515745 5108 scope.go:117] "RemoveContainer" containerID="4ef62e3790611ac24442a5f588e7ca2b5ba08a44df2403c7574a3d0c37d0d300"
Dec 12 14:28:24 crc kubenswrapper[5108]: E1212 14:28:24.516039 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2_service-telemetry(d9667806-a274-49a5-a527-cd0b9c72cd19)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" podUID="d9667806-a274-49a5-a527-cd0b9c72cd19"
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.520493 5108 generic.go:358] "Generic (PLEG): container finished" podID="645b4830-437b-47e6-bafc-73f709b51ece" containerID="9ba942dc005b3d28768ab1548e9b823e4f8548251b5ed68e8211c2d8447d94bb" exitCode=0
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.520578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerDied","Data":"9ba942dc005b3d28768ab1548e9b823e4f8548251b5ed68e8211c2d8447d94bb"}
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.521039 5108 scope.go:117] "RemoveContainer" containerID="9ba942dc005b3d28768ab1548e9b823e4f8548251b5ed68e8211c2d8447d94bb"
Dec 12 14:28:24 crc kubenswrapper[5108]: E1212 14:28:24.521289 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-l54mr_service-telemetry(645b4830-437b-47e6-bafc-73f709b51ece)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" podUID="645b4830-437b-47e6-bafc-73f709b51ece"
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.522063 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" event={"ID":"7bbd5d5c-f473-452c-95ec-0fc513b19235","Type":"ContainerStarted","Data":"0762eebe76d6d50e21f9947027b710cda8f23b12ec47b5717b12cf67511073f6"}
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.522103 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" event={"ID":"7bbd5d5c-f473-452c-95ec-0fc513b19235","Type":"ContainerStarted","Data":"b5bc16206261022042c2e166f6500530c661b1effaad8518d5dc8938f566dc9d"}
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.569765 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-ll9w8" podStartSLOduration=5.569749334 podStartE2EDuration="5.569749334s" podCreationTimestamp="2025-12-12 14:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:28:24.565560731 +0000 UTC m=+1057.473551890" watchObservedRunningTime="2025-12-12 14:28:24.569749334 +0000 UTC m=+1057.477740493"
Dec 12 14:28:24 crc kubenswrapper[5108]: I1212 14:28:24.605601 5108 scope.go:117] "RemoveContainer" containerID="1d7706ea9a331d9fa952bf514c27f34b402ff82d11e9161ae27a7093b20d00bc"
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.414620 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5304ef7-fb2d-463b-8f63-b765183d5e64" path="/var/lib/kubelet/pods/a5304ef7-fb2d-463b-8f63-b765183d5e64/volumes"
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.531589 5108 generic.go:358] "Generic (PLEG): container finished" podID="2d417cbc-3913-4f81-91fb-3cac9ac3cf33" containerID="a9538541e7e7e3e7a0dacc47fcf1d4e197ce1b6d15d916add6227450da4e25c9" exitCode=0
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.531706 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" event={"ID":"2d417cbc-3913-4f81-91fb-3cac9ac3cf33","Type":"ContainerDied","Data":"a9538541e7e7e3e7a0dacc47fcf1d4e197ce1b6d15d916add6227450da4e25c9"}
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.532727 5108 scope.go:117] "RemoveContainer" containerID="aec2d74e7b8a70d86da2a1fc169f76df9ba8c31b37d056653e2a0e53525c6960"
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.533481 5108 scope.go:117] "RemoveContainer" containerID="a9538541e7e7e3e7a0dacc47fcf1d4e197ce1b6d15d916add6227450da4e25c9"
Dec 12 14:28:25 crc kubenswrapper[5108]: E1212 14:28:25.533861 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-845b488456-nh7vw_service-telemetry(2d417cbc-3913-4f81-91fb-3cac9ac3cf33)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" podUID="2d417cbc-3913-4f81-91fb-3cac9ac3cf33"
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.535113 5108 generic.go:358] "Generic (PLEG): container finished" podID="34f718c9-3742-4c20-9212-66901e612e32" containerID="19e7eb94558705abf217b6ebb531ba83e088378059a9431e25689975288501cd" exitCode=0
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.535189 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" event={"ID":"34f718c9-3742-4c20-9212-66901e612e32","Type":"ContainerDied","Data":"19e7eb94558705abf217b6ebb531ba83e088378059a9431e25689975288501cd"}
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.535523 5108 scope.go:117] "RemoveContainer" containerID="19e7eb94558705abf217b6ebb531ba83e088378059a9431e25689975288501cd"
Dec 12 14:28:25 crc kubenswrapper[5108]: E1212 14:28:25.535687 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp_service-telemetry(34f718c9-3742-4c20-9212-66901e612e32)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" podUID="34f718c9-3742-4c20-9212-66901e612e32"
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.541339 5108 generic.go:358] "Generic (PLEG): container finished" podID="37df5dc8-0b91-4cf7-91ba-d2c90b9cce16" containerID="76323442fd312e9b4bf6bae1343964c08b578c90fec688a467a2647278b4e81c" exitCode=0
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.541421 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerDied","Data":"76323442fd312e9b4bf6bae1343964c08b578c90fec688a467a2647278b4e81c"}
Dec 12 14:28:25 crc kubenswrapper[5108]: I1212 14:28:25.542214 5108 scope.go:117] "RemoveContainer" containerID="76323442fd312e9b4bf6bae1343964c08b578c90fec688a467a2647278b4e81c"
Dec 12 14:28:25 crc kubenswrapper[5108]: E1212 14:28:25.542513 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb_service-telemetry(37df5dc8-0b91-4cf7-91ba-d2c90b9cce16)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" podUID="37df5dc8-0b91-4cf7-91ba-d2c90b9cce16"
Dec 12 14:28:26 crc kubenswrapper[5108]: I1212 14:28:26.249703 5108 scope.go:117] "RemoveContainer" containerID="cfcd39888ed36ac77e5ad17299c7cc7825e0d02f4829a8933a643419ee28020f"
Dec 12 14:28:26 crc kubenswrapper[5108]: I1212 14:28:26.365302 5108 scope.go:117] "RemoveContainer" containerID="1e5d58ef1bcf19a1074e9cae27aa56686b7c6a04c556827e1bfb980ded049a6a"
Dec 12 14:28:26 crc kubenswrapper[5108]: I1212 14:28:26.868265 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"]
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.099206 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.099399 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.103171 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\""
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.105445 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\""
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.153772 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/56e26671-a8fe-40e3-a0c5-74be26f28ddc-qdr-test-config\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.153843 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkhx\" (UniqueName: \"kubernetes.io/projected/56e26671-a8fe-40e3-a0c5-74be26f28ddc-kube-api-access-cxkhx\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.153895 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/56e26671-a8fe-40e3-a0c5-74be26f28ddc-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.255846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/56e26671-a8fe-40e3-a0c5-74be26f28ddc-qdr-test-config\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.255912 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxkhx\" (UniqueName: \"kubernetes.io/projected/56e26671-a8fe-40e3-a0c5-74be26f28ddc-kube-api-access-cxkhx\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.255955 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/56e26671-a8fe-40e3-a0c5-74be26f28ddc-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.256976 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/56e26671-a8fe-40e3-a0c5-74be26f28ddc-qdr-test-config\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.267754 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/56e26671-a8fe-40e3-a0c5-74be26f28ddc-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.281555 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxkhx\" (UniqueName: \"kubernetes.io/projected/56e26671-a8fe-40e3-a0c5-74be26f28ddc-kube-api-access-cxkhx\") pod \"qdr-test\" (UID: \"56e26671-a8fe-40e3-a0c5-74be26f28ddc\") " pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.418019 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Dec 12 14:28:27 crc kubenswrapper[5108]: I1212 14:28:27.853983 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Dec 12 14:28:27 crc kubenswrapper[5108]: W1212 14:28:27.860910 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56e26671_a8fe_40e3_a0c5_74be26f28ddc.slice/crio-1dc3bb1a2eb4e5334b30571627a156298682407ad432a43d4d6f1cb00f9bc84b WatchSource:0}: Error finding container 1dc3bb1a2eb4e5334b30571627a156298682407ad432a43d4d6f1cb00f9bc84b: Status 404 returned error can't find the container with id 1dc3bb1a2eb4e5334b30571627a156298682407ad432a43d4d6f1cb00f9bc84b
Dec 12 14:28:28 crc kubenswrapper[5108]: I1212 14:28:28.580945 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"56e26671-a8fe-40e3-a0c5-74be26f28ddc","Type":"ContainerStarted","Data":"1dc3bb1a2eb4e5334b30571627a156298682407ad432a43d4d6f1cb00f9bc84b"}
Dec 12 14:28:35 crc kubenswrapper[5108]: I1212 14:28:35.408274 5108 scope.go:117] "RemoveContainer" containerID="9ba942dc005b3d28768ab1548e9b823e4f8548251b5ed68e8211c2d8447d94bb"
Dec 12 14:28:36 crc kubenswrapper[5108]: I1212 14:28:36.407797 5108 scope.go:117] "RemoveContainer" containerID="19e7eb94558705abf217b6ebb531ba83e088378059a9431e25689975288501cd"
Dec 12 14:28:36 crc kubenswrapper[5108]: I1212 14:28:36.408107 5108 scope.go:117] "RemoveContainer" containerID="4ef62e3790611ac24442a5f588e7ca2b5ba08a44df2403c7574a3d0c37d0d300"
Dec 12 14:28:38 crc kubenswrapper[5108]: I1212 14:28:38.407904 5108 scope.go:117] "RemoveContainer" containerID="76323442fd312e9b4bf6bae1343964c08b578c90fec688a467a2647278b4e81c"
Dec 12 14:28:41 crc kubenswrapper[5108]: I1212 14:28:41.407405 5108 scope.go:117] "RemoveContainer" containerID="a9538541e7e7e3e7a0dacc47fcf1d4e197ce1b6d15d916add6227450da4e25c9"
Dec 12 14:28:48 crc kubenswrapper[5108]: I1212 14:28:48.742827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-67mc2" event={"ID":"d9667806-a274-49a5-a527-cd0b9c72cd19","Type":"ContainerStarted","Data":"2c23563fc0f3c84715c178893a6065a5cd5b37ad08fdaa3370748553c08c99c4"}
Dec 12 14:28:48 crc kubenswrapper[5108]: I1212 14:28:48.745803 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-kqsjb" event={"ID":"37df5dc8-0b91-4cf7-91ba-d2c90b9cce16","Type":"ContainerStarted","Data":"c46a82947d6e4581a1a9eed4220f21a35a240d590e6925dfc9d1c99a988a20f2"}
Dec 12 14:28:48 crc kubenswrapper[5108]: I1212 14:28:48.748216 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"56e26671-a8fe-40e3-a0c5-74be26f28ddc","Type":"ContainerStarted","Data":"75aa6c88443773f005090c19f6bb3a48f8b466e2525ad1a16e03374a221ccc6c"}
Dec 12 14:28:48 crc kubenswrapper[5108]: I1212 14:28:48.751422 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-845b488456-nh7vw" event={"ID":"2d417cbc-3913-4f81-91fb-3cac9ac3cf33","Type":"ContainerStarted","Data":"a5bd96966a5c943042a28c5c2d97e4fe6aa0b7907719d0df1c80f767c3125441"}
Dec 12 14:28:48 crc kubenswrapper[5108]: I1212 14:28:48.760568 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f898d76fc-wqkfp" event={"ID":"34f718c9-3742-4c20-9212-66901e612e32","Type":"ContainerStarted","Data":"1290376c1836c21850d90e0a1254ea1ad85ec32d57d9e831f42352fc8a465bf6"}
Dec 12 14:28:48 crc kubenswrapper[5108]: I1212 14:28:48.764155 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-l54mr" event={"ID":"645b4830-437b-47e6-bafc-73f709b51ece","Type":"ContainerStarted","Data":"4a22b6e3924207ba398600d2d8ff49c2dcd381ed8d61df24958bb8c172019c98"}
Dec 12 14:28:48 crc kubenswrapper[5108]: I1212 14:28:48.792563 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.776604447 podStartE2EDuration="22.792542507s" podCreationTimestamp="2025-12-12 14:28:26 +0000 UTC" firstStartedPulling="2025-12-12 14:28:27.862631571 +0000 UTC m=+1060.770622730" lastFinishedPulling="2025-12-12 14:28:47.878569631 +0000 UTC m=+1080.786560790" observedRunningTime="2025-12-12 14:28:48.785723993 +0000 UTC m=+1081.693715162" watchObservedRunningTime="2025-12-12 14:28:48.792542507 +0000 UTC m=+1081.700533676"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.169793 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-b5pdg"]
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.182984 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.183266 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-b5pdg"]
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.185800 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\""
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.192679 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\""
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.192977 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\""
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.193133 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\""
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.193231 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\""
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.193427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\""
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.313672 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ww6p\" (UniqueName: \"kubernetes.io/projected/2ae969db-5b43-49f5-b476-49139689b0af-kube-api-access-9ww6p\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.313732 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-sensubility-config\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.313755 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-config\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.313773 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-publisher\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.313853 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-healthcheck-log\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.314127 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.314186 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.415375 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.415654 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.415958 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ww6p\" (UniqueName: \"kubernetes.io/projected/2ae969db-5b43-49f5-b476-49139689b0af-kube-api-access-9ww6p\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.416050 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-sensubility-config\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.416149 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-config\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.416203 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-publisher\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.416249 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-healthcheck-log\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.416371 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.417295 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.417443 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-publisher\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.417471 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-config\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.417708 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-healthcheck-log\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.417982 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-sensubility-config\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.471237 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ww6p\" (UniqueName: \"kubernetes.io/projected/2ae969db-5b43-49f5-b476-49139689b0af-kube-api-access-9ww6p\") pod \"stf-smoketest-smoke1-b5pdg\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.504713 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b5pdg"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.708000 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"]
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.715979 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.725338 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.822564 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgbcf\" (UniqueName: \"kubernetes.io/projected/214fd660-0ee7-42fd-866e-5bd569e19415-kube-api-access-wgbcf\") pod \"curl\" (UID: \"214fd660-0ee7-42fd-866e-5bd569e19415\") " pod="service-telemetry/curl"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.960828 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wgbcf\" (UniqueName: \"kubernetes.io/projected/214fd660-0ee7-42fd-866e-5bd569e19415-kube-api-access-wgbcf\") pod \"curl\" (UID: \"214fd660-0ee7-42fd-866e-5bd569e19415\") " pod="service-telemetry/curl"
Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.986248 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:28:49 crc kubenswrapper[5108]: I1212 14:28:49.986372 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:28:50 crc kubenswrapper[5108]: I1212 14:28:50.018378 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgbcf\" (UniqueName: \"kubernetes.io/projected/214fd660-0ee7-42fd-866e-5bd569e19415-kube-api-access-wgbcf\") pod \"curl\" (UID: \"214fd660-0ee7-42fd-866e-5bd569e19415\") " pod="service-telemetry/curl" Dec 12 14:28:50 crc kubenswrapper[5108]: I1212 14:28:50.054188 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-b5pdg"] Dec 12 14:28:50 crc kubenswrapper[5108]: I1212 14:28:50.058737 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 12 14:28:50 crc kubenswrapper[5108]: W1212 14:28:50.162981 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ae969db_5b43_49f5_b476_49139689b0af.slice/crio-8721536d792930d9629943e2488b3617c93623bc840fdf9c36200dd94b30e0f5 WatchSource:0}: Error finding container 8721536d792930d9629943e2488b3617c93623bc840fdf9c36200dd94b30e0f5: Status 404 returned error can't find the container with id 8721536d792930d9629943e2488b3617c93623bc840fdf9c36200dd94b30e0f5 Dec 12 14:28:50 crc kubenswrapper[5108]: I1212 14:28:50.815767 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" event={"ID":"2ae969db-5b43-49f5-b476-49139689b0af","Type":"ContainerStarted","Data":"8721536d792930d9629943e2488b3617c93623bc840fdf9c36200dd94b30e0f5"} Dec 12 14:28:51 crc kubenswrapper[5108]: W1212 14:28:51.050165 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod214fd660_0ee7_42fd_866e_5bd569e19415.slice/crio-12995344c473e1cb4d59e51b0d8c7d71b66041d2b7b84bed96e207825380e9ca WatchSource:0}: Error finding container 12995344c473e1cb4d59e51b0d8c7d71b66041d2b7b84bed96e207825380e9ca: Status 404 returned error can't find the container with id 12995344c473e1cb4d59e51b0d8c7d71b66041d2b7b84bed96e207825380e9ca Dec 12 14:28:51 crc kubenswrapper[5108]: I1212 14:28:51.053869 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 12 14:28:51 crc kubenswrapper[5108]: I1212 14:28:51.829793 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"214fd660-0ee7-42fd-866e-5bd569e19415","Type":"ContainerStarted","Data":"12995344c473e1cb4d59e51b0d8c7d71b66041d2b7b84bed96e207825380e9ca"} Dec 12 14:28:53 crc kubenswrapper[5108]: I1212 14:28:53.850992 5108 generic.go:358] "Generic 
(PLEG): container finished" podID="214fd660-0ee7-42fd-866e-5bd569e19415" containerID="bd4089b3181a47f032f2da99261f6823ea5e37719db61961394f6e2e6f28efdc" exitCode=0 Dec 12 14:28:53 crc kubenswrapper[5108]: I1212 14:28:53.851171 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"214fd660-0ee7-42fd-866e-5bd569e19415","Type":"ContainerDied","Data":"bd4089b3181a47f032f2da99261f6823ea5e37719db61961394f6e2e6f28efdc"} Dec 12 14:28:54 crc kubenswrapper[5108]: I1212 14:28:54.945086 5108 ???:1] "http: TLS handshake error from 192.168.126.11:58346: no serving certificate available for the kubelet" Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.275196 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.337205 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgbcf\" (UniqueName: \"kubernetes.io/projected/214fd660-0ee7-42fd-866e-5bd569e19415-kube-api-access-wgbcf\") pod \"214fd660-0ee7-42fd-866e-5bd569e19415\" (UID: \"214fd660-0ee7-42fd-866e-5bd569e19415\") " Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.383033 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214fd660-0ee7-42fd-866e-5bd569e19415-kube-api-access-wgbcf" (OuterVolumeSpecName: "kube-api-access-wgbcf") pod "214fd660-0ee7-42fd-866e-5bd569e19415" (UID: "214fd660-0ee7-42fd-866e-5bd569e19415"). InnerVolumeSpecName "kube-api-access-wgbcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.439364 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wgbcf\" (UniqueName: \"kubernetes.io/projected/214fd660-0ee7-42fd-866e-5bd569e19415-kube-api-access-wgbcf\") on node \"crc\" DevicePath \"\"" Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.457378 5108 ???:1] "http: TLS handshake error from 192.168.126.11:58348: no serving certificate available for the kubelet" Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.744742 5108 ???:1] "http: TLS handshake error from 192.168.126.11:58362: no serving certificate available for the kubelet" Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.921457 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"214fd660-0ee7-42fd-866e-5bd569e19415","Type":"ContainerDied","Data":"12995344c473e1cb4d59e51b0d8c7d71b66041d2b7b84bed96e207825380e9ca"} Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.921532 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12995344c473e1cb4d59e51b0d8c7d71b66041d2b7b84bed96e207825380e9ca" Dec 12 14:29:02 crc kubenswrapper[5108]: I1212 14:29:02.921561 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 12 14:29:04 crc kubenswrapper[5108]: I1212 14:29:04.935700 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" event={"ID":"2ae969db-5b43-49f5-b476-49139689b0af","Type":"ContainerStarted","Data":"6abde6c1bdd2f6c7440ab663a38d8dbc00aa2b2abfaa618771b6658bbcc082bd"} Dec 12 14:29:12 crc kubenswrapper[5108]: I1212 14:29:12.992541 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" event={"ID":"2ae969db-5b43-49f5-b476-49139689b0af","Type":"ContainerStarted","Data":"25cb8073b3e2d26ffe85cd47e72369dd90d6e9f7db518ad80dffe18a71ccfb99"} Dec 12 14:29:13 crc kubenswrapper[5108]: I1212 14:29:13.017879 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" podStartSLOduration=1.529140624 podStartE2EDuration="24.01786487s" podCreationTimestamp="2025-12-12 14:28:49 +0000 UTC" firstStartedPulling="2025-12-12 14:28:50.177368141 +0000 UTC m=+1083.085359300" lastFinishedPulling="2025-12-12 14:29:12.666092387 +0000 UTC m=+1105.574083546" observedRunningTime="2025-12-12 14:29:13.014889259 +0000 UTC m=+1105.922880408" watchObservedRunningTime="2025-12-12 14:29:13.01786487 +0000 UTC m=+1105.925856029" Dec 12 14:29:19 crc kubenswrapper[5108]: I1212 14:29:19.986362 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:29:19 crc kubenswrapper[5108]: I1212 14:29:19.986880 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:29:32 crc kubenswrapper[5108]: I1212 14:29:32.906717 5108 ???:1] "http: TLS handshake error from 192.168.126.11:43036: no serving certificate available for the kubelet" Dec 12 14:29:40 crc kubenswrapper[5108]: I1212 14:29:40.182871 5108 generic.go:358] "Generic (PLEG): container finished" podID="2ae969db-5b43-49f5-b476-49139689b0af" containerID="6abde6c1bdd2f6c7440ab663a38d8dbc00aa2b2abfaa618771b6658bbcc082bd" exitCode=0 Dec 12 14:29:40 crc kubenswrapper[5108]: I1212 14:29:40.183056 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" event={"ID":"2ae969db-5b43-49f5-b476-49139689b0af","Type":"ContainerDied","Data":"6abde6c1bdd2f6c7440ab663a38d8dbc00aa2b2abfaa618771b6658bbcc082bd"} Dec 12 14:29:40 crc kubenswrapper[5108]: I1212 14:29:40.184453 5108 scope.go:117] "RemoveContainer" containerID="6abde6c1bdd2f6c7440ab663a38d8dbc00aa2b2abfaa618771b6658bbcc082bd" Dec 12 14:29:45 crc kubenswrapper[5108]: I1212 14:29:45.219752 5108 generic.go:358] "Generic (PLEG): container finished" podID="2ae969db-5b43-49f5-b476-49139689b0af" containerID="25cb8073b3e2d26ffe85cd47e72369dd90d6e9f7db518ad80dffe18a71ccfb99" exitCode=0 Dec 12 14:29:45 crc kubenswrapper[5108]: I1212 14:29:45.219853 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" event={"ID":"2ae969db-5b43-49f5-b476-49139689b0af","Type":"ContainerDied","Data":"25cb8073b3e2d26ffe85cd47e72369dd90d6e9f7db518ad80dffe18a71ccfb99"} Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.459301 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.570175 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-publisher\") pod \"2ae969db-5b43-49f5-b476-49139689b0af\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.570519 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ww6p\" (UniqueName: \"kubernetes.io/projected/2ae969db-5b43-49f5-b476-49139689b0af-kube-api-access-9ww6p\") pod \"2ae969db-5b43-49f5-b476-49139689b0af\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.570548 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-sensubility-config\") pod \"2ae969db-5b43-49f5-b476-49139689b0af\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.570766 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-healthcheck-log\") pod \"2ae969db-5b43-49f5-b476-49139689b0af\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.570837 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-config\") pod \"2ae969db-5b43-49f5-b476-49139689b0af\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.570962 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-entrypoint-script\") pod \"2ae969db-5b43-49f5-b476-49139689b0af\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.571009 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-entrypoint-script\") pod \"2ae969db-5b43-49f5-b476-49139689b0af\" (UID: \"2ae969db-5b43-49f5-b476-49139689b0af\") " Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.599420 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "2ae969db-5b43-49f5-b476-49139689b0af" (UID: "2ae969db-5b43-49f5-b476-49139689b0af"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.601439 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae969db-5b43-49f5-b476-49139689b0af-kube-api-access-9ww6p" (OuterVolumeSpecName: "kube-api-access-9ww6p") pod "2ae969db-5b43-49f5-b476-49139689b0af" (UID: "2ae969db-5b43-49f5-b476-49139689b0af"). InnerVolumeSpecName "kube-api-access-9ww6p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.605232 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "2ae969db-5b43-49f5-b476-49139689b0af" (UID: "2ae969db-5b43-49f5-b476-49139689b0af"). 
InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.607726 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "2ae969db-5b43-49f5-b476-49139689b0af" (UID: "2ae969db-5b43-49f5-b476-49139689b0af"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.617364 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "2ae969db-5b43-49f5-b476-49139689b0af" (UID: "2ae969db-5b43-49f5-b476-49139689b0af"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.617340 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "2ae969db-5b43-49f5-b476-49139689b0af" (UID: "2ae969db-5b43-49f5-b476-49139689b0af"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.617832 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "2ae969db-5b43-49f5-b476-49139689b0af" (UID: "2ae969db-5b43-49f5-b476-49139689b0af"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.672361 5108 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.672406 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.672419 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.672435 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.672446 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.672459 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ww6p\" (UniqueName: \"kubernetes.io/projected/2ae969db-5b43-49f5-b476-49139689b0af-kube-api-access-9ww6p\") on node \"crc\" DevicePath \"\"" Dec 12 14:29:46 crc kubenswrapper[5108]: I1212 14:29:46.672469 5108 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/2ae969db-5b43-49f5-b476-49139689b0af-sensubility-config\") on node 
\"crc\" DevicePath \"\"" Dec 12 14:29:47 crc kubenswrapper[5108]: I1212 14:29:47.235097 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" Dec 12 14:29:47 crc kubenswrapper[5108]: I1212 14:29:47.235065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b5pdg" event={"ID":"2ae969db-5b43-49f5-b476-49139689b0af","Type":"ContainerDied","Data":"8721536d792930d9629943e2488b3617c93623bc840fdf9c36200dd94b30e0f5"} Dec 12 14:29:47 crc kubenswrapper[5108]: I1212 14:29:47.235231 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8721536d792930d9629943e2488b3617c93623bc840fdf9c36200dd94b30e0f5" Dec 12 14:29:49 crc kubenswrapper[5108]: I1212 14:29:49.985980 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:29:49 crc kubenswrapper[5108]: I1212 14:29:49.986403 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:29:49 crc kubenswrapper[5108]: I1212 14:29:49.986459 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w294k" Dec 12 14:29:49 crc kubenswrapper[5108]: I1212 14:29:49.987189 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f19199a3d9c3a6c659bccb9623a347d927104d49964c4e1d410c151cedc6fa9"} 
pod="openshift-machine-config-operator/machine-config-daemon-w294k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:29:49 crc kubenswrapper[5108]: I1212 14:29:49.987256 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" containerID="cri-o://0f19199a3d9c3a6c659bccb9623a347d927104d49964c4e1d410c151cedc6fa9" gracePeriod=600 Dec 12 14:29:51 crc kubenswrapper[5108]: I1212 14:29:51.265258 5108 generic.go:358] "Generic (PLEG): container finished" podID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerID="0f19199a3d9c3a6c659bccb9623a347d927104d49964c4e1d410c151cedc6fa9" exitCode=0 Dec 12 14:29:51 crc kubenswrapper[5108]: I1212 14:29:51.265328 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerDied","Data":"0f19199a3d9c3a6c659bccb9623a347d927104d49964c4e1d410c151cedc6fa9"} Dec 12 14:29:51 crc kubenswrapper[5108]: I1212 14:29:51.265734 5108 scope.go:117] "RemoveContainer" containerID="4ab7ac5ee0d1edb7108d7f5ec4e957c0f7674bd3372b098711a9332769c2a4ec" Dec 12 14:29:52 crc kubenswrapper[5108]: I1212 14:29:52.274691 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"d4e2fff0d63d757d5d9730fa1e9d1084c3f5b10f916afca13fcbc803cb7bb990"} Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.141640 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs"] Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143044 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="2ae969db-5b43-49f5-b476-49139689b0af" containerName="smoketest-collectd" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143062 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae969db-5b43-49f5-b476-49139689b0af" containerName="smoketest-collectd" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143098 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="214fd660-0ee7-42fd-866e-5bd569e19415" containerName="curl" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143105 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="214fd660-0ee7-42fd-866e-5bd569e19415" containerName="curl" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143142 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ae969db-5b43-49f5-b476-49139689b0af" containerName="smoketest-ceilometer" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143151 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae969db-5b43-49f5-b476-49139689b0af" containerName="smoketest-ceilometer" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143287 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="214fd660-0ee7-42fd-866e-5bd569e19415" containerName="curl" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143311 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ae969db-5b43-49f5-b476-49139689b0af" containerName="smoketest-collectd" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.143321 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ae969db-5b43-49f5-b476-49139689b0af" containerName="smoketest-ceilometer" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.322953 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs"] Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.323174 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.326166 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.326558 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.467331 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/546b92fb-97a1-414f-a049-95d84e22b762-config-volume\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.467735 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dp4\" (UniqueName: \"kubernetes.io/projected/546b92fb-97a1-414f-a049-95d84e22b762-kube-api-access-z6dp4\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.467817 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/546b92fb-97a1-414f-a049-95d84e22b762-secret-volume\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.568941 5108 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-z6dp4\" (UniqueName: \"kubernetes.io/projected/546b92fb-97a1-414f-a049-95d84e22b762-kube-api-access-z6dp4\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.569012 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/546b92fb-97a1-414f-a049-95d84e22b762-secret-volume\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.569044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/546b92fb-97a1-414f-a049-95d84e22b762-config-volume\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.574405 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/546b92fb-97a1-414f-a049-95d84e22b762-config-volume\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.581183 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/546b92fb-97a1-414f-a049-95d84e22b762-secret-volume\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc 
kubenswrapper[5108]: I1212 14:30:00.587301 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dp4\" (UniqueName: \"kubernetes.io/projected/546b92fb-97a1-414f-a049-95d84e22b762-kube-api-access-z6dp4\") pod \"collect-profiles-29425830-ztmgs\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.654574 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:00 crc kubenswrapper[5108]: I1212 14:30:00.878120 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs"] Dec 12 14:30:01 crc kubenswrapper[5108]: I1212 14:30:01.337817 5108 generic.go:358] "Generic (PLEG): container finished" podID="546b92fb-97a1-414f-a049-95d84e22b762" containerID="c3c01f3107407ddb240cdd4d17ea69ab0edbb3a490a6e3625d107d9c044faf6b" exitCode=0 Dec 12 14:30:01 crc kubenswrapper[5108]: I1212 14:30:01.337940 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" event={"ID":"546b92fb-97a1-414f-a049-95d84e22b762","Type":"ContainerDied","Data":"c3c01f3107407ddb240cdd4d17ea69ab0edbb3a490a6e3625d107d9c044faf6b"} Dec 12 14:30:01 crc kubenswrapper[5108]: I1212 14:30:01.338439 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" event={"ID":"546b92fb-97a1-414f-a049-95d84e22b762","Type":"ContainerStarted","Data":"24bb3f37ef84e1f308f5c34a959e1ffbfc66c697b3af3b9b8d7cffbf4925d8dd"} Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.583769 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.603012 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/546b92fb-97a1-414f-a049-95d84e22b762-secret-volume\") pod \"546b92fb-97a1-414f-a049-95d84e22b762\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.603130 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/546b92fb-97a1-414f-a049-95d84e22b762-config-volume\") pod \"546b92fb-97a1-414f-a049-95d84e22b762\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.604221 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/546b92fb-97a1-414f-a049-95d84e22b762-config-volume" (OuterVolumeSpecName: "config-volume") pod "546b92fb-97a1-414f-a049-95d84e22b762" (UID: "546b92fb-97a1-414f-a049-95d84e22b762"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.610643 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546b92fb-97a1-414f-a049-95d84e22b762-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "546b92fb-97a1-414f-a049-95d84e22b762" (UID: "546b92fb-97a1-414f-a049-95d84e22b762"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.704048 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6dp4\" (UniqueName: \"kubernetes.io/projected/546b92fb-97a1-414f-a049-95d84e22b762-kube-api-access-z6dp4\") pod \"546b92fb-97a1-414f-a049-95d84e22b762\" (UID: \"546b92fb-97a1-414f-a049-95d84e22b762\") " Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.704382 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/546b92fb-97a1-414f-a049-95d84e22b762-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.704396 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/546b92fb-97a1-414f-a049-95d84e22b762-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.708659 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/546b92fb-97a1-414f-a049-95d84e22b762-kube-api-access-z6dp4" (OuterVolumeSpecName: "kube-api-access-z6dp4") pod "546b92fb-97a1-414f-a049-95d84e22b762" (UID: "546b92fb-97a1-414f-a049-95d84e22b762"). InnerVolumeSpecName "kube-api-access-z6dp4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:30:02 crc kubenswrapper[5108]: I1212 14:30:02.805582 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6dp4\" (UniqueName: \"kubernetes.io/projected/546b92fb-97a1-414f-a049-95d84e22b762-kube-api-access-z6dp4\") on node \"crc\" DevicePath \"\"" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.098291 5108 ???:1] "http: TLS handshake error from 192.168.126.11:59054: no serving certificate available for the kubelet" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.356118 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" event={"ID":"546b92fb-97a1-414f-a049-95d84e22b762","Type":"ContainerDied","Data":"24bb3f37ef84e1f308f5c34a959e1ffbfc66c697b3af3b9b8d7cffbf4925d8dd"} Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.356172 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24bb3f37ef84e1f308f5c34a959e1ffbfc66c697b3af3b9b8d7cffbf4925d8dd" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.356231 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-ztmgs" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.435745 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-s7ph6"] Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.436602 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="546b92fb-97a1-414f-a049-95d84e22b762" containerName="collect-profiles" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.436632 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="546b92fb-97a1-414f-a049-95d84e22b762" containerName="collect-profiles" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.436774 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="546b92fb-97a1-414f-a049-95d84e22b762" containerName="collect-profiles" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.445806 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.446690 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-s7ph6"] Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.618607 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwjv9\" (UniqueName: \"kubernetes.io/projected/996512aa-ac28-4275-9a56-436c7a873bb2-kube-api-access-pwjv9\") pod \"infrawatch-operators-s7ph6\" (UID: \"996512aa-ac28-4275-9a56-436c7a873bb2\") " pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.720054 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pwjv9\" (UniqueName: \"kubernetes.io/projected/996512aa-ac28-4275-9a56-436c7a873bb2-kube-api-access-pwjv9\") pod \"infrawatch-operators-s7ph6\" (UID: 
\"996512aa-ac28-4275-9a56-436c7a873bb2\") " pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.738661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwjv9\" (UniqueName: \"kubernetes.io/projected/996512aa-ac28-4275-9a56-436c7a873bb2-kube-api-access-pwjv9\") pod \"infrawatch-operators-s7ph6\" (UID: \"996512aa-ac28-4275-9a56-436c7a873bb2\") " pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.762816 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:03 crc kubenswrapper[5108]: I1212 14:30:03.976372 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-s7ph6"] Dec 12 14:30:03 crc kubenswrapper[5108]: W1212 14:30:03.991286 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996512aa_ac28_4275_9a56_436c7a873bb2.slice/crio-34905e8252450f9e501d0d85e56de68e7ab7e900177a28a51127e8c04c828a14 WatchSource:0}: Error finding container 34905e8252450f9e501d0d85e56de68e7ab7e900177a28a51127e8c04c828a14: Status 404 returned error can't find the container with id 34905e8252450f9e501d0d85e56de68e7ab7e900177a28a51127e8c04c828a14 Dec 12 14:30:04 crc kubenswrapper[5108]: I1212 14:30:04.365476 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-s7ph6" event={"ID":"996512aa-ac28-4275-9a56-436c7a873bb2","Type":"ContainerStarted","Data":"34905e8252450f9e501d0d85e56de68e7ab7e900177a28a51127e8c04c828a14"} Dec 12 14:30:05 crc kubenswrapper[5108]: I1212 14:30:05.373231 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-s7ph6" 
event={"ID":"996512aa-ac28-4275-9a56-436c7a873bb2","Type":"ContainerStarted","Data":"076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e"} Dec 12 14:30:13 crc kubenswrapper[5108]: I1212 14:30:13.763498 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:13 crc kubenswrapper[5108]: I1212 14:30:13.764236 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:13 crc kubenswrapper[5108]: I1212 14:30:13.807206 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:13 crc kubenswrapper[5108]: I1212 14:30:13.826743 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-s7ph6" podStartSLOduration=10.218331997 podStartE2EDuration="10.826725026s" podCreationTimestamp="2025-12-12 14:30:03 +0000 UTC" firstStartedPulling="2025-12-12 14:30:03.993142044 +0000 UTC m=+1156.901133203" lastFinishedPulling="2025-12-12 14:30:04.601535073 +0000 UTC m=+1157.509526232" observedRunningTime="2025-12-12 14:30:05.403275748 +0000 UTC m=+1158.311266927" watchObservedRunningTime="2025-12-12 14:30:13.826725026 +0000 UTC m=+1166.734716185" Dec 12 14:30:14 crc kubenswrapper[5108]: I1212 14:30:14.460920 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:14 crc kubenswrapper[5108]: I1212 14:30:14.829573 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-s7ph6"] Dec 12 14:30:16 crc kubenswrapper[5108]: I1212 14:30:16.448628 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-s7ph6" podUID="996512aa-ac28-4275-9a56-436c7a873bb2" containerName="registry-server" 
containerID="cri-o://076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e" gracePeriod=2 Dec 12 14:30:16 crc kubenswrapper[5108]: I1212 14:30:16.807752 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:16 crc kubenswrapper[5108]: I1212 14:30:16.898028 5108 ???:1] "http: TLS handshake error from 192.168.126.11:36566: no serving certificate available for the kubelet" Dec 12 14:30:16 crc kubenswrapper[5108]: I1212 14:30:16.906588 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwjv9\" (UniqueName: \"kubernetes.io/projected/996512aa-ac28-4275-9a56-436c7a873bb2-kube-api-access-pwjv9\") pod \"996512aa-ac28-4275-9a56-436c7a873bb2\" (UID: \"996512aa-ac28-4275-9a56-436c7a873bb2\") " Dec 12 14:30:16 crc kubenswrapper[5108]: I1212 14:30:16.917304 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996512aa-ac28-4275-9a56-436c7a873bb2-kube-api-access-pwjv9" (OuterVolumeSpecName: "kube-api-access-pwjv9") pod "996512aa-ac28-4275-9a56-436c7a873bb2" (UID: "996512aa-ac28-4275-9a56-436c7a873bb2"). InnerVolumeSpecName "kube-api-access-pwjv9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.007789 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwjv9\" (UniqueName: \"kubernetes.io/projected/996512aa-ac28-4275-9a56-436c7a873bb2-kube-api-access-pwjv9\") on node \"crc\" DevicePath \"\"" Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.466031 5108 generic.go:358] "Generic (PLEG): container finished" podID="996512aa-ac28-4275-9a56-436c7a873bb2" containerID="076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e" exitCode=0 Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.466122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-s7ph6" event={"ID":"996512aa-ac28-4275-9a56-436c7a873bb2","Type":"ContainerDied","Data":"076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e"} Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.467453 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-s7ph6" event={"ID":"996512aa-ac28-4275-9a56-436c7a873bb2","Type":"ContainerDied","Data":"34905e8252450f9e501d0d85e56de68e7ab7e900177a28a51127e8c04c828a14"} Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.467487 5108 scope.go:117] "RemoveContainer" containerID="076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e" Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.466172 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-s7ph6" Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.489246 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-s7ph6"] Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.493956 5108 scope.go:117] "RemoveContainer" containerID="076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e" Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.494036 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-s7ph6"] Dec 12 14:30:17 crc kubenswrapper[5108]: E1212 14:30:17.494349 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e\": container with ID starting with 076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e not found: ID does not exist" containerID="076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e" Dec 12 14:30:17 crc kubenswrapper[5108]: I1212 14:30:17.494386 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e"} err="failed to get container status \"076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e\": rpc error: code = NotFound desc = could not find container \"076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e\": container with ID starting with 076f14cdb57930492bac1848f983413fd3017fb5a95c6f4c703bb9f41ce6e04e not found: ID does not exist" Dec 12 14:30:19 crc kubenswrapper[5108]: I1212 14:30:19.415318 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996512aa-ac28-4275-9a56-436c7a873bb2" path="/var/lib/kubelet/pods/996512aa-ac28-4275-9a56-436c7a873bb2/volumes" Dec 12 14:30:33 crc kubenswrapper[5108]: I1212 14:30:33.265734 5108 ???:1] "http: TLS handshake 
error from 192.168.126.11:41126: no serving certificate available for the kubelet" Dec 12 14:30:48 crc kubenswrapper[5108]: I1212 14:30:48.000591 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log" Dec 12 14:30:48 crc kubenswrapper[5108]: I1212 14:30:48.000866 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log" Dec 12 14:30:48 crc kubenswrapper[5108]: I1212 14:30:48.009144 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 14:30:48 crc kubenswrapper[5108]: I1212 14:30:48.009492 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 14:31:03 crc kubenswrapper[5108]: I1212 14:31:03.450534 5108 ???:1] "http: TLS handshake error from 192.168.126.11:50006: no serving certificate available for the kubelet" Dec 12 14:31:34 crc kubenswrapper[5108]: I1212 14:31:34.897752 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42118: no serving certificate available for the kubelet" Dec 12 14:31:35 crc kubenswrapper[5108]: I1212 14:31:35.219969 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42132: no serving certificate available for the kubelet" Dec 12 14:31:35 crc kubenswrapper[5108]: I1212 14:31:35.558676 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42136: no serving certificate available for the kubelet" Dec 12 14:31:35 crc kubenswrapper[5108]: I1212 14:31:35.815829 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42146: no serving certificate available for the kubelet" Dec 12 14:31:36 crc kubenswrapper[5108]: I1212 14:31:36.100272 5108 ???:1] "http: TLS 
handshake error from 192.168.126.11:42152: no serving certificate available for the kubelet" Dec 12 14:31:36 crc kubenswrapper[5108]: I1212 14:31:36.450729 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42162: no serving certificate available for the kubelet" Dec 12 14:31:36 crc kubenswrapper[5108]: I1212 14:31:36.763931 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42166: no serving certificate available for the kubelet" Dec 12 14:31:37 crc kubenswrapper[5108]: I1212 14:31:37.051936 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42182: no serving certificate available for the kubelet" Dec 12 14:31:37 crc kubenswrapper[5108]: I1212 14:31:37.391113 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42184: no serving certificate available for the kubelet" Dec 12 14:31:37 crc kubenswrapper[5108]: I1212 14:31:37.655987 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42188: no serving certificate available for the kubelet" Dec 12 14:31:37 crc kubenswrapper[5108]: I1212 14:31:37.994716 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42202: no serving certificate available for the kubelet" Dec 12 14:31:38 crc kubenswrapper[5108]: I1212 14:31:38.316248 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42214: no serving certificate available for the kubelet" Dec 12 14:31:38 crc kubenswrapper[5108]: I1212 14:31:38.611313 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42228: no serving certificate available for the kubelet" Dec 12 14:31:38 crc kubenswrapper[5108]: I1212 14:31:38.870988 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42244: no serving certificate available for the kubelet" Dec 12 14:31:39 crc kubenswrapper[5108]: I1212 14:31:39.181956 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42250: no serving certificate available for the kubelet" Dec 12 14:31:39 crc kubenswrapper[5108]: I1212 14:31:39.454417 5108 ???:1] "http: TLS handshake error from 
192.168.126.11:42256: no serving certificate available for the kubelet" Dec 12 14:31:39 crc kubenswrapper[5108]: I1212 14:31:39.746937 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42262: no serving certificate available for the kubelet" Dec 12 14:31:40 crc kubenswrapper[5108]: I1212 14:31:40.110584 5108 ???:1] "http: TLS handshake error from 192.168.126.11:42264: no serving certificate available for the kubelet" Dec 12 14:31:51 crc kubenswrapper[5108]: I1212 14:31:51.405655 5108 ???:1] "http: TLS handshake error from 192.168.126.11:39522: no serving certificate available for the kubelet" Dec 12 14:31:53 crc kubenswrapper[5108]: I1212 14:31:53.018366 5108 ???:1] "http: TLS handshake error from 192.168.126.11:39536: no serving certificate available for the kubelet" Dec 12 14:31:53 crc kubenswrapper[5108]: I1212 14:31:53.290636 5108 ???:1] "http: TLS handshake error from 192.168.126.11:39544: no serving certificate available for the kubelet" Dec 12 14:31:53 crc kubenswrapper[5108]: I1212 14:31:53.534526 5108 ???:1] "http: TLS handshake error from 192.168.126.11:39558: no serving certificate available for the kubelet" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.863647 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hhwl4/must-gather-xr4r7"] Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.864834 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="996512aa-ac28-4275-9a56-436c7a873bb2" containerName="registry-server" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.864848 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="996512aa-ac28-4275-9a56-436c7a873bb2" containerName="registry-server" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.864982 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="996512aa-ac28-4275-9a56-436c7a873bb2" containerName="registry-server" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.871251 5108 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.875238 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hhwl4\"/\"openshift-service-ca.crt\"" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.875385 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hhwl4\"/\"kube-root-ca.crt\"" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.875246 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-hhwl4\"/\"default-dockercfg-lb5f7\"" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.879716 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hhwl4/must-gather-xr4r7"] Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.975269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/969001d1-4f41-4fc8-ab93-da58f6ac8581-must-gather-output\") pod \"must-gather-xr4r7\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") " pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:17 crc kubenswrapper[5108]: I1212 14:32:17.975404 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgtv\" (UniqueName: \"kubernetes.io/projected/969001d1-4f41-4fc8-ab93-da58f6ac8581-kube-api-access-jhgtv\") pod \"must-gather-xr4r7\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") " pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:18 crc kubenswrapper[5108]: I1212 14:32:18.076571 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/969001d1-4f41-4fc8-ab93-da58f6ac8581-must-gather-output\") pod 
\"must-gather-xr4r7\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") " pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:18 crc kubenswrapper[5108]: I1212 14:32:18.076805 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jhgtv\" (UniqueName: \"kubernetes.io/projected/969001d1-4f41-4fc8-ab93-da58f6ac8581-kube-api-access-jhgtv\") pod \"must-gather-xr4r7\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") " pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:18 crc kubenswrapper[5108]: I1212 14:32:18.077056 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/969001d1-4f41-4fc8-ab93-da58f6ac8581-must-gather-output\") pod \"must-gather-xr4r7\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") " pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:18 crc kubenswrapper[5108]: I1212 14:32:18.097913 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgtv\" (UniqueName: \"kubernetes.io/projected/969001d1-4f41-4fc8-ab93-da58f6ac8581-kube-api-access-jhgtv\") pod \"must-gather-xr4r7\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") " pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:18 crc kubenswrapper[5108]: I1212 14:32:18.187170 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" Dec 12 14:32:18 crc kubenswrapper[5108]: I1212 14:32:18.407596 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hhwl4/must-gather-xr4r7"] Dec 12 14:32:18 crc kubenswrapper[5108]: I1212 14:32:18.415635 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:32:19 crc kubenswrapper[5108]: I1212 14:32:19.371802 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" event={"ID":"969001d1-4f41-4fc8-ab93-da58f6ac8581","Type":"ContainerStarted","Data":"264512cb029bc066bf900a82e5285b973d284710a83ada69f45d25a0b03af985"} Dec 12 14:32:19 crc kubenswrapper[5108]: I1212 14:32:19.986584 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:32:19 crc kubenswrapper[5108]: I1212 14:32:19.986937 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:32:24 crc kubenswrapper[5108]: I1212 14:32:24.408834 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" event={"ID":"969001d1-4f41-4fc8-ab93-da58f6ac8581","Type":"ContainerStarted","Data":"a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af"} Dec 12 14:32:24 crc kubenswrapper[5108]: I1212 14:32:24.409293 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" 
event={"ID":"969001d1-4f41-4fc8-ab93-da58f6ac8581","Type":"ContainerStarted","Data":"c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986"} Dec 12 14:32:27 crc kubenswrapper[5108]: I1212 14:32:27.858488 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33212: no serving certificate available for the kubelet" Dec 12 14:32:49 crc kubenswrapper[5108]: I1212 14:32:49.986566 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:32:49 crc kubenswrapper[5108]: I1212 14:32:49.987525 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:33:00 crc kubenswrapper[5108]: I1212 14:33:00.762416 5108 ???:1] "http: TLS handshake error from 192.168.126.11:60708: no serving certificate available for the kubelet" Dec 12 14:33:00 crc kubenswrapper[5108]: I1212 14:33:00.982183 5108 ???:1] "http: TLS handshake error from 192.168.126.11:60722: no serving certificate available for the kubelet" Dec 12 14:33:01 crc kubenswrapper[5108]: I1212 14:33:01.093600 5108 ???:1] "http: TLS handshake error from 192.168.126.11:60724: no serving certificate available for the kubelet" Dec 12 14:33:01 crc kubenswrapper[5108]: I1212 14:33:01.116635 5108 ???:1] "http: TLS handshake error from 192.168.126.11:60726: no serving certificate available for the kubelet" Dec 12 14:33:12 crc kubenswrapper[5108]: I1212 14:33:12.631333 5108 ???:1] "http: TLS handshake error from 192.168.126.11:37900: no serving certificate available for the kubelet" Dec 12 14:33:12 crc 
kubenswrapper[5108]: I1212 14:33:12.766321 5108 ???:1] "http: TLS handshake error from 192.168.126.11:37912: no serving certificate available for the kubelet"
Dec 12 14:33:12 crc kubenswrapper[5108]: I1212 14:33:12.792820 5108 ???:1] "http: TLS handshake error from 192.168.126.11:37916: no serving certificate available for the kubelet"
Dec 12 14:33:19 crc kubenswrapper[5108]: I1212 14:33:19.986204 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:33:19 crc kubenswrapper[5108]: I1212 14:33:19.986750 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:33:19 crc kubenswrapper[5108]: I1212 14:33:19.986792 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w294k"
Dec 12 14:33:19 crc kubenswrapper[5108]: I1212 14:33:19.987408 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4e2fff0d63d757d5d9730fa1e9d1084c3f5b10f916afca13fcbc803cb7bb990"} pod="openshift-machine-config-operator/machine-config-daemon-w294k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 14:33:19 crc kubenswrapper[5108]: I1212 14:33:19.987464 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" containerID="cri-o://d4e2fff0d63d757d5d9730fa1e9d1084c3f5b10f916afca13fcbc803cb7bb990" gracePeriod=600
Dec 12 14:33:20 crc kubenswrapper[5108]: I1212 14:33:20.826179 5108 generic.go:358] "Generic (PLEG): container finished" podID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerID="d4e2fff0d63d757d5d9730fa1e9d1084c3f5b10f916afca13fcbc803cb7bb990" exitCode=0
Dec 12 14:33:20 crc kubenswrapper[5108]: I1212 14:33:20.826252 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerDied","Data":"d4e2fff0d63d757d5d9730fa1e9d1084c3f5b10f916afca13fcbc803cb7bb990"}
Dec 12 14:33:20 crc kubenswrapper[5108]: I1212 14:33:20.826804 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w294k" event={"ID":"fcb30c12-8b29-461d-ab3e-a76577b664d6","Type":"ContainerStarted","Data":"c3befc983679f936697f0012c92f20182d05892c76d38c16f39e387bc3c7f84b"}
Dec 12 14:33:20 crc kubenswrapper[5108]: I1212 14:33:20.826827 5108 scope.go:117] "RemoveContainer" containerID="0f19199a3d9c3a6c659bccb9623a347d927104d49964c4e1d410c151cedc6fa9"
Dec 12 14:33:20 crc kubenswrapper[5108]: I1212 14:33:20.848175 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" podStartSLOduration=59.008423041 podStartE2EDuration="1m3.848160909s" podCreationTimestamp="2025-12-12 14:32:17 +0000 UTC" firstStartedPulling="2025-12-12 14:32:18.415860786 +0000 UTC m=+1291.323851945" lastFinishedPulling="2025-12-12 14:32:23.255598654 +0000 UTC m=+1296.163589813" observedRunningTime="2025-12-12 14:32:24.427419535 +0000 UTC m=+1297.335410704" watchObservedRunningTime="2025-12-12 14:33:20.848160909 +0000 UTC m=+1353.756152068"
Dec 12 14:33:26 crc kubenswrapper[5108]: I1212 14:33:26.606621 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33362: no serving certificate available for the kubelet"
Dec 12 14:33:26 crc kubenswrapper[5108]: I1212 14:33:26.756504 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33370: no serving certificate available for the kubelet"
Dec 12 14:33:26 crc kubenswrapper[5108]: I1212 14:33:26.789315 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33378: no serving certificate available for the kubelet"
Dec 12 14:33:26 crc kubenswrapper[5108]: I1212 14:33:26.804902 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33382: no serving certificate available for the kubelet"
Dec 12 14:33:26 crc kubenswrapper[5108]: I1212 14:33:26.953362 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33386: no serving certificate available for the kubelet"
Dec 12 14:33:26 crc kubenswrapper[5108]: I1212 14:33:26.970825 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33402: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.011095 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33416: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.113809 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33420: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.259221 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33436: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.283560 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33450: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.288881 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33454: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.452538 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33460: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.454906 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33462: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.457196 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33478: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.590480 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33490: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.791678 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33506: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.793066 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33518: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.800406 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33522: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.945694 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33534: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.960341 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33548: no serving certificate available for the kubelet"
Dec 12 14:33:27 crc kubenswrapper[5108]: I1212 14:33:27.989412 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33556: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.140605 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33560: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.273444 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33572: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.292228 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33578: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.300294 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33588: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.467584 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33590: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.481362 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33592: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.486179 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33594: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.638780 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33608: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.807032 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33618: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.830653 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33630: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.835909 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33632: no serving certificate available for the kubelet"
Dec 12 14:33:28 crc kubenswrapper[5108]: I1212 14:33:28.987620 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33648: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.004697 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33662: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.010443 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33666: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.154030 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33668: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.290237 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33678: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.295549 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33692: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.305191 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33698: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.465739 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33700: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.470594 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33706: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.496753 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33714: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.502570 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33722: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.649564 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33738: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.770951 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33744: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.793647 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33758: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.800335 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33760: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.966738 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33764: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.970649 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33770: no serving certificate available for the kubelet"
Dec 12 14:33:29 crc kubenswrapper[5108]: I1212 14:33:29.981427 5108 ???:1] "http: TLS handshake error from 192.168.126.11:33772: no serving certificate available for the kubelet"
Dec 12 14:33:40 crc kubenswrapper[5108]: I1212 14:33:40.982336 5108 ???:1] "http: TLS handshake error from 192.168.126.11:41474: no serving certificate available for the kubelet"
Dec 12 14:33:41 crc kubenswrapper[5108]: I1212 14:33:41.092997 5108 ???:1] "http: TLS handshake error from 192.168.126.11:41480: no serving certificate available for the kubelet"
Dec 12 14:33:41 crc kubenswrapper[5108]: I1212 14:33:41.152464 5108 ???:1] "http: TLS handshake error from 192.168.126.11:41490: no serving certificate available for the kubelet"
Dec 12 14:33:41 crc kubenswrapper[5108]: I1212 14:33:41.264002 5108 ???:1] "http: TLS handshake error from 192.168.126.11:41500: no serving certificate available for the kubelet"
Dec 12 14:33:41 crc kubenswrapper[5108]: I1212 14:33:41.329411 5108 ???:1] "http: TLS handshake error from 192.168.126.11:41516: no serving certificate available for the kubelet"
Dec 12 14:34:20 crc kubenswrapper[5108]: I1212 14:34:20.273197 5108 generic.go:358] "Generic (PLEG): container finished" podID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerID="c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986" exitCode=0
Dec 12 14:34:20 crc kubenswrapper[5108]: I1212 14:34:20.273289 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" event={"ID":"969001d1-4f41-4fc8-ab93-da58f6ac8581","Type":"ContainerDied","Data":"c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986"}
Dec 12 14:34:20 crc kubenswrapper[5108]: I1212 14:34:20.274312 5108 scope.go:117] "RemoveContainer" containerID="c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986"
Dec 12 14:34:27 crc kubenswrapper[5108]: I1212 14:34:27.884576 5108 ???:1] "http: TLS handshake error from 192.168.126.11:55946: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.048651 5108 ???:1] "http: TLS handshake error from 192.168.126.11:55956: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.060224 5108 ???:1] "http: TLS handshake error from 192.168.126.11:55960: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.084982 5108 ???:1] "http: TLS handshake error from 192.168.126.11:55972: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.094994 5108 ???:1] "http: TLS handshake error from 192.168.126.11:55984: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.108467 5108 ???:1] "http: TLS handshake error from 192.168.126.11:55996: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.123244 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56004: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.137428 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56010: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.149895 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56012: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.299274 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56020: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.310895 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56030: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.338595 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56044: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.349376 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56048: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.366362 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56064: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.376854 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56074: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.393109 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56084: no serving certificate available for the kubelet"
Dec 12 14:34:28 crc kubenswrapper[5108]: I1212 14:34:28.406742 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56094: no serving certificate available for the kubelet"
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.450651 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hhwl4/must-gather-xr4r7"]
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.451533 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerName="copy" containerID="cri-o://a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af" gracePeriod=2
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.453025 5108 status_manager.go:895] "Failed to get status for pod" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" err="pods \"must-gather-xr4r7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-hhwl4\": no relationship found between node 'crc' and this object"
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.457723 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hhwl4/must-gather-xr4r7"]
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.850178 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hhwl4_must-gather-xr4r7_969001d1-4f41-4fc8-ab93-da58f6ac8581/copy/0.log"
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.850820 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hhwl4/must-gather-xr4r7"
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.852558 5108 status_manager.go:895] "Failed to get status for pod" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" err="pods \"must-gather-xr4r7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-hhwl4\": no relationship found between node 'crc' and this object"
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.936325 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/969001d1-4f41-4fc8-ab93-da58f6ac8581-must-gather-output\") pod \"969001d1-4f41-4fc8-ab93-da58f6ac8581\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") "
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.936647 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhgtv\" (UniqueName: \"kubernetes.io/projected/969001d1-4f41-4fc8-ab93-da58f6ac8581-kube-api-access-jhgtv\") pod \"969001d1-4f41-4fc8-ab93-da58f6ac8581\" (UID: \"969001d1-4f41-4fc8-ab93-da58f6ac8581\") "
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.945333 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969001d1-4f41-4fc8-ab93-da58f6ac8581-kube-api-access-jhgtv" (OuterVolumeSpecName: "kube-api-access-jhgtv") pod "969001d1-4f41-4fc8-ab93-da58f6ac8581" (UID: "969001d1-4f41-4fc8-ab93-da58f6ac8581"). InnerVolumeSpecName "kube-api-access-jhgtv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:34:33 crc kubenswrapper[5108]: I1212 14:34:33.983119 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/969001d1-4f41-4fc8-ab93-da58f6ac8581-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "969001d1-4f41-4fc8-ab93-da58f6ac8581" (UID: "969001d1-4f41-4fc8-ab93-da58f6ac8581"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.038753 5108 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/969001d1-4f41-4fc8-ab93-da58f6ac8581-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.038784 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jhgtv\" (UniqueName: \"kubernetes.io/projected/969001d1-4f41-4fc8-ab93-da58f6ac8581-kube-api-access-jhgtv\") on node \"crc\" DevicePath \"\""
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.380482 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hhwl4_must-gather-xr4r7_969001d1-4f41-4fc8-ab93-da58f6ac8581/copy/0.log"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.381180 5108 generic.go:358] "Generic (PLEG): container finished" podID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerID="a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af" exitCode=143
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.381248 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hhwl4/must-gather-xr4r7"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.381362 5108 scope.go:117] "RemoveContainer" containerID="a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.383013 5108 status_manager.go:895] "Failed to get status for pod" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" err="pods \"must-gather-xr4r7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-hhwl4\": no relationship found between node 'crc' and this object"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.399418 5108 status_manager.go:895] "Failed to get status for pod" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" pod="openshift-must-gather-hhwl4/must-gather-xr4r7" err="pods \"must-gather-xr4r7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-hhwl4\": no relationship found between node 'crc' and this object"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.403925 5108 scope.go:117] "RemoveContainer" containerID="c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.476829 5108 scope.go:117] "RemoveContainer" containerID="a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af"
Dec 12 14:34:34 crc kubenswrapper[5108]: E1212 14:34:34.477217 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af\": container with ID starting with a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af not found: ID does not exist" containerID="a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.477265 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af"} err="failed to get container status \"a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af\": rpc error: code = NotFound desc = could not find container \"a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af\": container with ID starting with a6c3dc8ebb03ba76f260143484662cef06846377a4398682e03674b66a13d3af not found: ID does not exist"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.477315 5108 scope.go:117] "RemoveContainer" containerID="c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986"
Dec 12 14:34:34 crc kubenswrapper[5108]: E1212 14:34:34.477523 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986\": container with ID starting with c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986 not found: ID does not exist" containerID="c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986"
Dec 12 14:34:34 crc kubenswrapper[5108]: I1212 14:34:34.477542 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986"} err="failed to get container status \"c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986\": rpc error: code = NotFound desc = could not find container \"c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986\": container with ID starting with c31d2c66e7bdac35ef14a3004f2fdbeb3033be497b260957c8c1cdd19becf986 not found: ID does not exist"
Dec 12 14:34:35 crc kubenswrapper[5108]: I1212 14:34:35.418013 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" path="/var/lib/kubelet/pods/969001d1-4f41-4fc8-ab93-da58f6ac8581/volumes"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.879231 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dsx5t"]
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.881575 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerName="copy"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.881683 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerName="copy"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.881771 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerName="gather"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.881829 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerName="gather"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.882023 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerName="gather"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.882111 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="969001d1-4f41-4fc8-ab93-da58f6ac8581" containerName="copy"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.886453 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.892237 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsx5t"]
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.967659 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-utilities\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.967956 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvq8\" (UniqueName: \"kubernetes.io/projected/ebf140b8-eebf-48b4-bf30-e28ceb656715-kube-api-access-ktvq8\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:42 crc kubenswrapper[5108]: I1212 14:34:42.968074 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-catalog-content\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.069796 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-utilities\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.069863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvq8\" (UniqueName: \"kubernetes.io/projected/ebf140b8-eebf-48b4-bf30-e28ceb656715-kube-api-access-ktvq8\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.069939 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-catalog-content\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.070570 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-catalog-content\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.070712 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-utilities\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.092251 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvq8\" (UniqueName: \"kubernetes.io/projected/ebf140b8-eebf-48b4-bf30-e28ceb656715-kube-api-access-ktvq8\") pod \"community-operators-dsx5t\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") " pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.218303 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:43 crc kubenswrapper[5108]: I1212 14:34:43.536735 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsx5t"]
Dec 12 14:34:44 crc kubenswrapper[5108]: I1212 14:34:44.498914 5108 generic.go:358] "Generic (PLEG): container finished" podID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerID="6a930a47b96d433f48febe82a0c29720f060b64924723f0c4677485d29364791" exitCode=0
Dec 12 14:34:44 crc kubenswrapper[5108]: I1212 14:34:44.499011 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsx5t" event={"ID":"ebf140b8-eebf-48b4-bf30-e28ceb656715","Type":"ContainerDied","Data":"6a930a47b96d433f48febe82a0c29720f060b64924723f0c4677485d29364791"}
Dec 12 14:34:44 crc kubenswrapper[5108]: I1212 14:34:44.499434 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsx5t" event={"ID":"ebf140b8-eebf-48b4-bf30-e28ceb656715","Type":"ContainerStarted","Data":"085bfb31cc43cabf434f4d553f642bc007a9f127b0f135ce7b331586afcda397"}
Dec 12 14:34:46 crc kubenswrapper[5108]: I1212 14:34:46.513577 5108 generic.go:358] "Generic (PLEG): container finished" podID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerID="3660e003e2a98a141a8a0d392ebfbdcee726d1a9010329e0e052dd84fa419a7c" exitCode=0
Dec 12 14:34:46 crc kubenswrapper[5108]: I1212 14:34:46.513648 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsx5t" event={"ID":"ebf140b8-eebf-48b4-bf30-e28ceb656715","Type":"ContainerDied","Data":"3660e003e2a98a141a8a0d392ebfbdcee726d1a9010329e0e052dd84fa419a7c"}
Dec 12 14:34:47 crc kubenswrapper[5108]: I1212 14:34:47.525597 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsx5t" event={"ID":"ebf140b8-eebf-48b4-bf30-e28ceb656715","Type":"ContainerStarted","Data":"3d85410ae058e4a9da31ad6eefbd7b2a5e2e1dbb89cee38c70adf571ebd00505"}
Dec 12 14:34:47 crc kubenswrapper[5108]: I1212 14:34:47.554998 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dsx5t" podStartSLOduration=4.430409072 podStartE2EDuration="5.554979543s" podCreationTimestamp="2025-12-12 14:34:42 +0000 UTC" firstStartedPulling="2025-12-12 14:34:44.501002212 +0000 UTC m=+1437.408993361" lastFinishedPulling="2025-12-12 14:34:45.625572673 +0000 UTC m=+1438.533563832" observedRunningTime="2025-12-12 14:34:47.550242396 +0000 UTC m=+1440.458233555" watchObservedRunningTime="2025-12-12 14:34:47.554979543 +0000 UTC m=+1440.462970702"
Dec 12 14:34:53 crc kubenswrapper[5108]: I1212 14:34:53.219431 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:53 crc kubenswrapper[5108]: I1212 14:34:53.221189 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:53 crc kubenswrapper[5108]: I1212 14:34:53.260801 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:53 crc kubenswrapper[5108]: I1212 14:34:53.613812 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:53 crc kubenswrapper[5108]: I1212 14:34:53.662814 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dsx5t"]
Dec 12 14:34:56 crc kubenswrapper[5108]: I1212 14:34:56.123823 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dsx5t" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="registry-server" containerID="cri-o://3d85410ae058e4a9da31ad6eefbd7b2a5e2e1dbb89cee38c70adf571ebd00505" gracePeriod=2
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.134737 5108 generic.go:358] "Generic (PLEG): container finished" podID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerID="3d85410ae058e4a9da31ad6eefbd7b2a5e2e1dbb89cee38c70adf571ebd00505" exitCode=0
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.134812 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsx5t" event={"ID":"ebf140b8-eebf-48b4-bf30-e28ceb656715","Type":"ContainerDied","Data":"3d85410ae058e4a9da31ad6eefbd7b2a5e2e1dbb89cee38c70adf571ebd00505"}
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.605751 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.712742 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-utilities\") pod \"ebf140b8-eebf-48b4-bf30-e28ceb656715\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") "
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.712845 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-catalog-content\") pod \"ebf140b8-eebf-48b4-bf30-e28ceb656715\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") "
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.712953 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktvq8\" (UniqueName: \"kubernetes.io/projected/ebf140b8-eebf-48b4-bf30-e28ceb656715-kube-api-access-ktvq8\") pod \"ebf140b8-eebf-48b4-bf30-e28ceb656715\" (UID: \"ebf140b8-eebf-48b4-bf30-e28ceb656715\") "
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.714874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-utilities" (OuterVolumeSpecName: "utilities") pod "ebf140b8-eebf-48b4-bf30-e28ceb656715" (UID: "ebf140b8-eebf-48b4-bf30-e28ceb656715"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.719586 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf140b8-eebf-48b4-bf30-e28ceb656715-kube-api-access-ktvq8" (OuterVolumeSpecName: "kube-api-access-ktvq8") pod "ebf140b8-eebf-48b4-bf30-e28ceb656715" (UID: "ebf140b8-eebf-48b4-bf30-e28ceb656715"). InnerVolumeSpecName "kube-api-access-ktvq8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.759940 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebf140b8-eebf-48b4-bf30-e28ceb656715" (UID: "ebf140b8-eebf-48b4-bf30-e28ceb656715"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.815082 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.815179 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf140b8-eebf-48b4-bf30-e28ceb656715-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 14:34:57 crc kubenswrapper[5108]: I1212 14:34:57.815194 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ktvq8\" (UniqueName: \"kubernetes.io/projected/ebf140b8-eebf-48b4-bf30-e28ceb656715-kube-api-access-ktvq8\") on node \"crc\" DevicePath \"\""
Dec 12 14:34:58 crc kubenswrapper[5108]: I1212 14:34:58.147609 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsx5t"
Dec 12 14:34:58 crc kubenswrapper[5108]: I1212 14:34:58.147616 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsx5t" event={"ID":"ebf140b8-eebf-48b4-bf30-e28ceb656715","Type":"ContainerDied","Data":"085bfb31cc43cabf434f4d553f642bc007a9f127b0f135ce7b331586afcda397"}
Dec 12 14:34:58 crc kubenswrapper[5108]: I1212 14:34:58.148216 5108 scope.go:117] "RemoveContainer" containerID="3d85410ae058e4a9da31ad6eefbd7b2a5e2e1dbb89cee38c70adf571ebd00505"
Dec 12 14:34:58 crc kubenswrapper[5108]: I1212 14:34:58.182953 5108 scope.go:117] "RemoveContainer" containerID="3660e003e2a98a141a8a0d392ebfbdcee726d1a9010329e0e052dd84fa419a7c"
Dec 12 14:34:58 crc kubenswrapper[5108]: I1212 14:34:58.193783 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dsx5t"]
Dec 12 14:34:58 crc kubenswrapper[5108]: I1212 14:34:58.202805 5108
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dsx5t"] Dec 12 14:34:58 crc kubenswrapper[5108]: I1212 14:34:58.234565 5108 scope.go:117] "RemoveContainer" containerID="6a930a47b96d433f48febe82a0c29720f060b64924723f0c4677485d29364791" Dec 12 14:34:59 crc kubenswrapper[5108]: I1212 14:34:59.430765 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" path="/var/lib/kubelet/pods/ebf140b8-eebf-48b4-bf30-e28ceb656715/volumes" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.490559 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lqgsx"] Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.491923 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="extract-utilities" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.491943 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="extract-utilities" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.491954 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="registry-server" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.491961 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="registry-server" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.491980 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="extract-content" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.491985 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="extract-content" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 
14:35:19.492176 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ebf140b8-eebf-48b4-bf30-e28ceb656715" containerName="registry-server" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.513420 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqgsx"] Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.513575 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.573625 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-catalog-content\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.574426 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vq5r\" (UniqueName: \"kubernetes.io/projected/cbec7565-7bef-46f1-a219-530052c9632f-kube-api-access-9vq5r\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.574655 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-utilities\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.676100 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-utilities\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.676167 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-catalog-content\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.676268 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vq5r\" (UniqueName: \"kubernetes.io/projected/cbec7565-7bef-46f1-a219-530052c9632f-kube-api-access-9vq5r\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.677002 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-utilities\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.677168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-catalog-content\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.699268 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vq5r\" (UniqueName: 
\"kubernetes.io/projected/cbec7565-7bef-46f1-a219-530052c9632f-kube-api-access-9vq5r\") pod \"certified-operators-lqgsx\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:19 crc kubenswrapper[5108]: I1212 14:35:19.829734 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:20 crc kubenswrapper[5108]: I1212 14:35:20.108951 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqgsx"] Dec 12 14:35:20 crc kubenswrapper[5108]: I1212 14:35:20.321818 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerStarted","Data":"e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8"} Dec 12 14:35:20 crc kubenswrapper[5108]: I1212 14:35:20.321889 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerStarted","Data":"e82ac68ba7770f2ed1836e49dc81fec79a18744eb696b5a52d6cda21a0bd2792"} Dec 12 14:35:21 crc kubenswrapper[5108]: I1212 14:35:21.333009 5108 generic.go:358] "Generic (PLEG): container finished" podID="cbec7565-7bef-46f1-a219-530052c9632f" containerID="e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8" exitCode=0 Dec 12 14:35:21 crc kubenswrapper[5108]: I1212 14:35:21.333159 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerDied","Data":"e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8"} Dec 12 14:35:22 crc kubenswrapper[5108]: I1212 14:35:22.342887 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" 
event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerStarted","Data":"4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf"} Dec 12 14:35:23 crc kubenswrapper[5108]: I1212 14:35:23.349624 5108 generic.go:358] "Generic (PLEG): container finished" podID="cbec7565-7bef-46f1-a219-530052c9632f" containerID="4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf" exitCode=0 Dec 12 14:35:23 crc kubenswrapper[5108]: I1212 14:35:23.349687 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerDied","Data":"4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf"} Dec 12 14:35:24 crc kubenswrapper[5108]: I1212 14:35:24.378572 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerStarted","Data":"addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d"} Dec 12 14:35:24 crc kubenswrapper[5108]: I1212 14:35:24.399585 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lqgsx" podStartSLOduration=4.624153375 podStartE2EDuration="5.399567762s" podCreationTimestamp="2025-12-12 14:35:19 +0000 UTC" firstStartedPulling="2025-12-12 14:35:21.334443301 +0000 UTC m=+1474.242434480" lastFinishedPulling="2025-12-12 14:35:22.109857708 +0000 UTC m=+1475.017848867" observedRunningTime="2025-12-12 14:35:24.393660873 +0000 UTC m=+1477.301652042" watchObservedRunningTime="2025-12-12 14:35:24.399567762 +0000 UTC m=+1477.307558921" Dec 12 14:35:29 crc kubenswrapper[5108]: I1212 14:35:29.830847 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:29 crc kubenswrapper[5108]: I1212 14:35:29.831252 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:29 crc kubenswrapper[5108]: I1212 14:35:29.879225 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:30 crc kubenswrapper[5108]: I1212 14:35:30.461751 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:30 crc kubenswrapper[5108]: I1212 14:35:30.506222 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqgsx"] Dec 12 14:35:32 crc kubenswrapper[5108]: I1212 14:35:32.430901 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lqgsx" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="registry-server" containerID="cri-o://addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d" gracePeriod=2 Dec 12 14:35:32 crc kubenswrapper[5108]: I1212 14:35:32.811850 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:32 crc kubenswrapper[5108]: I1212 14:35:32.978698 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vq5r\" (UniqueName: \"kubernetes.io/projected/cbec7565-7bef-46f1-a219-530052c9632f-kube-api-access-9vq5r\") pod \"cbec7565-7bef-46f1-a219-530052c9632f\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " Dec 12 14:35:32 crc kubenswrapper[5108]: I1212 14:35:32.978813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-catalog-content\") pod \"cbec7565-7bef-46f1-a219-530052c9632f\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " Dec 12 14:35:32 crc kubenswrapper[5108]: I1212 14:35:32.978904 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-utilities\") pod \"cbec7565-7bef-46f1-a219-530052c9632f\" (UID: \"cbec7565-7bef-46f1-a219-530052c9632f\") " Dec 12 14:35:32 crc kubenswrapper[5108]: I1212 14:35:32.993382 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-utilities" (OuterVolumeSpecName: "utilities") pod "cbec7565-7bef-46f1-a219-530052c9632f" (UID: "cbec7565-7bef-46f1-a219-530052c9632f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:35:32 crc kubenswrapper[5108]: I1212 14:35:32.998988 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbec7565-7bef-46f1-a219-530052c9632f-kube-api-access-9vq5r" (OuterVolumeSpecName: "kube-api-access-9vq5r") pod "cbec7565-7bef-46f1-a219-530052c9632f" (UID: "cbec7565-7bef-46f1-a219-530052c9632f"). InnerVolumeSpecName "kube-api-access-9vq5r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.014512 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbec7565-7bef-46f1-a219-530052c9632f" (UID: "cbec7565-7bef-46f1-a219-530052c9632f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.081046 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.081099 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbec7565-7bef-46f1-a219-530052c9632f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.081112 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vq5r\" (UniqueName: \"kubernetes.io/projected/cbec7565-7bef-46f1-a219-530052c9632f-kube-api-access-9vq5r\") on node \"crc\" DevicePath \"\"" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.441551 5108 generic.go:358] "Generic (PLEG): container finished" podID="cbec7565-7bef-46f1-a219-530052c9632f" containerID="addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d" exitCode=0 Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.441665 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqgsx" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.441646 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerDied","Data":"addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d"} Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.441825 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgsx" event={"ID":"cbec7565-7bef-46f1-a219-530052c9632f","Type":"ContainerDied","Data":"e82ac68ba7770f2ed1836e49dc81fec79a18744eb696b5a52d6cda21a0bd2792"} Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.441856 5108 scope.go:117] "RemoveContainer" containerID="addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.470304 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqgsx"] Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.474235 5108 scope.go:117] "RemoveContainer" containerID="4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.475469 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lqgsx"] Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.493043 5108 scope.go:117] "RemoveContainer" containerID="e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.516642 5108 scope.go:117] "RemoveContainer" containerID="addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d" Dec 12 14:35:33 crc kubenswrapper[5108]: E1212 14:35:33.517005 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d\": container with ID starting with addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d not found: ID does not exist" containerID="addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.517043 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d"} err="failed to get container status \"addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d\": rpc error: code = NotFound desc = could not find container \"addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d\": container with ID starting with addb53e52061cac9e12ab93f6aee6179e2497ba35ed8257d93d773227041434d not found: ID does not exist" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.517068 5108 scope.go:117] "RemoveContainer" containerID="4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf" Dec 12 14:35:33 crc kubenswrapper[5108]: E1212 14:35:33.517353 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf\": container with ID starting with 4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf not found: ID does not exist" containerID="4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.517375 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf"} err="failed to get container status \"4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf\": rpc error: code = NotFound desc = could not find container \"4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf\": container with ID 
starting with 4241d0722b74baee4208896b5f2c0b6dd97493dc99388e682816624f87716abf not found: ID does not exist" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.517396 5108 scope.go:117] "RemoveContainer" containerID="e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8" Dec 12 14:35:33 crc kubenswrapper[5108]: E1212 14:35:33.517658 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8\": container with ID starting with e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8 not found: ID does not exist" containerID="e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8" Dec 12 14:35:33 crc kubenswrapper[5108]: I1212 14:35:33.517680 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8"} err="failed to get container status \"e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8\": rpc error: code = NotFound desc = could not find container \"e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8\": container with ID starting with e0e2ada6924f208f4182c9496be2fa68cc6c75d56fb1bedcf04e2cb9086fc9b8 not found: ID does not exist" Dec 12 14:35:35 crc kubenswrapper[5108]: I1212 14:35:35.419559 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbec7565-7bef-46f1-a219-530052c9632f" path="/var/lib/kubelet/pods/cbec7565-7bef-46f1-a219-530052c9632f/volumes" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.521961 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-dtdms"] Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.523491 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="extract-content" Dec 12 14:35:39 
crc kubenswrapper[5108]: I1212 14:35:39.523513 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="extract-content" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.523528 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="registry-server" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.523536 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="registry-server" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.523594 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="extract-utilities" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.523602 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="extract-utilities" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.523766 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbec7565-7bef-46f1-a219-530052c9632f" containerName="registry-server" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.535033 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-dtdms"] Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.535213 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-dtdms" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.571373 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nzwd\" (UniqueName: \"kubernetes.io/projected/97729623-326c-4667-b99c-6135bcbacda8-kube-api-access-7nzwd\") pod \"infrawatch-operators-dtdms\" (UID: \"97729623-326c-4667-b99c-6135bcbacda8\") " pod="service-telemetry/infrawatch-operators-dtdms" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.672999 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7nzwd\" (UniqueName: \"kubernetes.io/projected/97729623-326c-4667-b99c-6135bcbacda8-kube-api-access-7nzwd\") pod \"infrawatch-operators-dtdms\" (UID: \"97729623-326c-4667-b99c-6135bcbacda8\") " pod="service-telemetry/infrawatch-operators-dtdms" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.698322 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nzwd\" (UniqueName: \"kubernetes.io/projected/97729623-326c-4667-b99c-6135bcbacda8-kube-api-access-7nzwd\") pod \"infrawatch-operators-dtdms\" (UID: \"97729623-326c-4667-b99c-6135bcbacda8\") " pod="service-telemetry/infrawatch-operators-dtdms" Dec 12 14:35:39 crc kubenswrapper[5108]: I1212 14:35:39.855292 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-dtdms" Dec 12 14:35:40 crc kubenswrapper[5108]: I1212 14:35:40.273223 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-dtdms"] Dec 12 14:35:40 crc kubenswrapper[5108]: I1212 14:35:40.496842 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dtdms" event={"ID":"97729623-326c-4667-b99c-6135bcbacda8","Type":"ContainerStarted","Data":"820d69d02e047c99dfa4a7c74e7bab12d3b846d77551bb51fbf98ca7de866f9d"} Dec 12 14:35:41 crc kubenswrapper[5108]: I1212 14:35:41.507397 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dtdms" event={"ID":"97729623-326c-4667-b99c-6135bcbacda8","Type":"ContainerStarted","Data":"358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc"} Dec 12 14:35:41 crc kubenswrapper[5108]: I1212 14:35:41.524171 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-dtdms" podStartSLOduration=1.895056851 podStartE2EDuration="2.524150269s" podCreationTimestamp="2025-12-12 14:35:39 +0000 UTC" firstStartedPulling="2025-12-12 14:35:40.282442098 +0000 UTC m=+1493.190433257" lastFinishedPulling="2025-12-12 14:35:40.911535506 +0000 UTC m=+1493.819526675" observedRunningTime="2025-12-12 14:35:41.520736407 +0000 UTC m=+1494.428727576" watchObservedRunningTime="2025-12-12 14:35:41.524150269 +0000 UTC m=+1494.432141428" Dec 12 14:35:48 crc kubenswrapper[5108]: I1212 14:35:48.119281 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log" Dec 12 14:35:48 crc kubenswrapper[5108]: I1212 14:35:48.119354 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztpws_1e8c3045-7200-4b39-9531-5ce86ab0b5b5/kube-multus/0.log" Dec 12 14:35:48 crc kubenswrapper[5108]: I1212 
14:35:48.132017 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:35:48 crc kubenswrapper[5108]: I1212 14:35:48.132405 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 14:35:49 crc kubenswrapper[5108]: I1212 14:35:49.855967 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-dtdms"
Dec 12 14:35:49 crc kubenswrapper[5108]: I1212 14:35:49.857251 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-dtdms"
Dec 12 14:35:49 crc kubenswrapper[5108]: I1212 14:35:49.886298 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-dtdms"
Dec 12 14:35:49 crc kubenswrapper[5108]: I1212 14:35:49.985752 5108 patch_prober.go:28] interesting pod/machine-config-daemon-w294k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:35:49 crc kubenswrapper[5108]: I1212 14:35:49.985838 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w294k" podUID="fcb30c12-8b29-461d-ab3e-a76577b664d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:35:50 crc kubenswrapper[5108]: I1212 14:35:50.595537 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-dtdms"
Dec 12 14:35:51 crc kubenswrapper[5108]: I1212 14:35:51.291656 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-dtdms"]
Dec 12 14:35:52 crc kubenswrapper[5108]: I1212 14:35:52.583799 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-dtdms" podUID="97729623-326c-4667-b99c-6135bcbacda8" containerName="registry-server" containerID="cri-o://358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc" gracePeriod=2
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.063685 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-dtdms"
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.175062 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nzwd\" (UniqueName: \"kubernetes.io/projected/97729623-326c-4667-b99c-6135bcbacda8-kube-api-access-7nzwd\") pod \"97729623-326c-4667-b99c-6135bcbacda8\" (UID: \"97729623-326c-4667-b99c-6135bcbacda8\") "
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.181260 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97729623-326c-4667-b99c-6135bcbacda8-kube-api-access-7nzwd" (OuterVolumeSpecName: "kube-api-access-7nzwd") pod "97729623-326c-4667-b99c-6135bcbacda8" (UID: "97729623-326c-4667-b99c-6135bcbacda8"). InnerVolumeSpecName "kube-api-access-7nzwd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.276972 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7nzwd\" (UniqueName: \"kubernetes.io/projected/97729623-326c-4667-b99c-6135bcbacda8-kube-api-access-7nzwd\") on node \"crc\" DevicePath \"\""
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.592781 5108 generic.go:358] "Generic (PLEG): container finished" podID="97729623-326c-4667-b99c-6135bcbacda8" containerID="358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc" exitCode=0
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.592910 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dtdms" event={"ID":"97729623-326c-4667-b99c-6135bcbacda8","Type":"ContainerDied","Data":"358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc"}
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.592939 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dtdms" event={"ID":"97729623-326c-4667-b99c-6135bcbacda8","Type":"ContainerDied","Data":"820d69d02e047c99dfa4a7c74e7bab12d3b846d77551bb51fbf98ca7de866f9d"}
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.592958 5108 scope.go:117] "RemoveContainer" containerID="358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc"
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.593181 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-dtdms"
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.615706 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-dtdms"]
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.620430 5108 scope.go:117] "RemoveContainer" containerID="358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc"
Dec 12 14:35:53 crc kubenswrapper[5108]: E1212 14:35:53.621008 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc\": container with ID starting with 358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc not found: ID does not exist" containerID="358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc"
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.621057 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc"} err="failed to get container status \"358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc\": rpc error: code = NotFound desc = could not find container \"358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc\": container with ID starting with 358118ef203caab2580dca37b538b4dcebc15dae739a45de6230f9d7924f79cc not found: ID does not exist"
Dec 12 14:35:53 crc kubenswrapper[5108]: I1212 14:35:53.621346 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-dtdms"]
Dec 12 14:35:55 crc kubenswrapper[5108]: I1212 14:35:55.416265 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97729623-326c-4667-b99c-6135bcbacda8" path="/var/lib/kubelet/pods/97729623-326c-4667-b99c-6135bcbacda8/volumes"