Dec 08 17:42:23 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 17:42:23 crc kubenswrapper[5118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:42:23 crc kubenswrapper[5118]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 17:42:23 crc kubenswrapper[5118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:42:23 crc kubenswrapper[5118]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:42:23 crc kubenswrapper[5118]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 17:42:23 crc kubenswrapper[5118]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
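[Editor's note] The deprecation warnings above all point at the same remedy: set these options in the KubeletConfiguration file named by --config (reported later in this log as /etc/kubernetes/kubelet.conf). The fragment below is a minimal, hypothetical sketch of what that migration could look like; it is not this cluster's actual config file. Field names follow the upstream kubelet.config.k8s.io/v1beta1 schema, and the values are copied from the FLAG dump further down in this log.

  # Hypothetical sketch only -- assumed mapping, not taken from this cluster's config.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: /var/run/crio/crio.sock        # was --container-runtime-endpoint
  volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec   # was --volume-plugin-dir
  registerWithTaints:                                       # was --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  systemReserved:                                           # was --system-reserved
    cpu: 200m
    ephemeral-storage: 350Mi
    memory: 350Mi
  # --minimum-container-ttl-duration has no direct config-file field; the warning suggests
  # evictionHard / evictionSoft instead. --pod-infra-container-image is being taken over by CRI.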
Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.223822 5118 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226695 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226714 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226720 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226724 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226729 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226735 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226739 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226746 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226751 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226756 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226761 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226767 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226772 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226776 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226780 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226786 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226790 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226794 5118 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226799 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226803 5118 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226807 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226812 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226816 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226820 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:42:23 crc 
kubenswrapper[5118]: W1208 17:42:23.226825 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226829 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226834 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226838 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226842 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226847 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226851 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226855 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226861 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226865 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226870 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226889 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226894 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226898 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226903 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226908 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226912 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226916 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226921 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226925 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226932 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226938 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226943 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226948 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226952 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226957 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226961 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226965 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226970 5118 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226974 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226978 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226985 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226991 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.226996 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227000 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227005 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227009 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227013 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227018 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227021 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227025 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227030 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227034 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227039 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227043 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227047 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy 
Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227052 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227057 5118 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227061 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227065 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227069 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227073 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227078 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227082 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227087 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227091 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227095 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227119 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227126 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227131 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227135 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227140 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227814 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227825 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227829 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227834 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227840 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227844 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227849 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227853 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227858 5118 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227862 5118 
feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227866 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227899 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227903 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227908 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227912 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227916 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227921 5118 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227926 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227930 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227934 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227939 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227943 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227948 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227952 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227956 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227960 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227964 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227968 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227973 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227977 5118 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227981 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227985 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227989 5118 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.227995 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228000 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228005 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228010 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228014 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228019 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228023 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228028 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228034 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228038 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228042 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228048 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228052 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228056 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228060 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228064 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228069 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228073 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228076 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228080 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228085 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228090 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228094 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228098 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228103 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228106 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:42:23 crc kubenswrapper[5118]: 
W1208 17:42:23.228111 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228115 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228119 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228123 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228127 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228131 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228135 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228139 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228144 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228148 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228152 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228156 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228161 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228164 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228168 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228172 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228176 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228180 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228186 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228190 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228193 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228204 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228209 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228215 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228219 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228223 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.228227 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228318 5118 flags.go:64] FLAG: --address="0.0.0.0" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228328 5118 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228336 5118 flags.go:64] FLAG: --anonymous-auth="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228343 5118 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228356 5118 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228362 5118 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228368 5118 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228375 5118 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228380 5118 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228410 5118 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228416 5118 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228421 5118 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228426 5118 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228430 5118 flags.go:64] FLAG: --cgroup-root="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228435 5118 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228439 5118 flags.go:64] FLAG: --client-ca-file="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228444 5118 flags.go:64] FLAG: --cloud-config="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228449 5118 flags.go:64] FLAG: --cloud-provider="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228453 5118 flags.go:64] FLAG: --cluster-dns="[]" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228461 5118 flags.go:64] FLAG: --cluster-domain="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228465 5118 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228470 5118 flags.go:64] FLAG: --config-dir="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228475 5118 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228480 5118 flags.go:64] FLAG: --container-log-max-files="5" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228487 5118 flags.go:64] FLAG: 
--container-log-max-size="10Mi" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228492 5118 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228500 5118 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228505 5118 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228510 5118 flags.go:64] FLAG: --contention-profiling="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228514 5118 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228519 5118 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228523 5118 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228531 5118 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228538 5118 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228542 5118 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228547 5118 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228552 5118 flags.go:64] FLAG: --enable-load-reader="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228557 5118 flags.go:64] FLAG: --enable-server="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228562 5118 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228568 5118 flags.go:64] FLAG: --event-burst="100" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228572 5118 flags.go:64] FLAG: --event-qps="50" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228577 5118 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228582 5118 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228587 5118 flags.go:64] FLAG: --eviction-hard="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228592 5118 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228597 5118 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228601 5118 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228606 5118 flags.go:64] FLAG: --eviction-soft="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228611 5118 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228616 5118 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228620 5118 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228625 5118 flags.go:64] FLAG: --experimental-mounter-path="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228629 5118 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228634 5118 flags.go:64] FLAG: --fail-swap-on="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 
17:42:23.228638 5118 flags.go:64] FLAG: --feature-gates="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228644 5118 flags.go:64] FLAG: --file-check-frequency="20s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228649 5118 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228659 5118 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228666 5118 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228671 5118 flags.go:64] FLAG: --healthz-port="10248" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228676 5118 flags.go:64] FLAG: --help="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228681 5118 flags.go:64] FLAG: --hostname-override="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228685 5118 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228690 5118 flags.go:64] FLAG: --http-check-frequency="20s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228698 5118 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228703 5118 flags.go:64] FLAG: --image-credential-provider-config="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228707 5118 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228712 5118 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228716 5118 flags.go:64] FLAG: --image-service-endpoint="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228720 5118 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228725 5118 flags.go:64] FLAG: --kube-api-burst="100" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228729 5118 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228734 5118 flags.go:64] FLAG: --kube-api-qps="50" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228739 5118 flags.go:64] FLAG: --kube-reserved="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228743 5118 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228748 5118 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228753 5118 flags.go:64] FLAG: --kubelet-cgroups="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228757 5118 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228761 5118 flags.go:64] FLAG: --lock-file="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228766 5118 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228770 5118 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228775 5118 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228782 5118 flags.go:64] FLAG: --log-json-split-stream="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228787 5118 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228791 5118 flags.go:64] FLAG: 
--log-text-split-stream="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228796 5118 flags.go:64] FLAG: --logging-format="text" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228801 5118 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228807 5118 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228812 5118 flags.go:64] FLAG: --manifest-url="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228817 5118 flags.go:64] FLAG: --manifest-url-header="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228827 5118 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228832 5118 flags.go:64] FLAG: --max-open-files="1000000" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228839 5118 flags.go:64] FLAG: --max-pods="110" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228844 5118 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228850 5118 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228854 5118 flags.go:64] FLAG: --memory-manager-policy="None" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228862 5118 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228866 5118 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228886 5118 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228892 5118 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228905 5118 flags.go:64] FLAG: --node-status-max-images="50" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228909 5118 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228913 5118 flags.go:64] FLAG: --oom-score-adj="-999" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228918 5118 flags.go:64] FLAG: --pod-cidr="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228922 5118 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228931 5118 flags.go:64] FLAG: --pod-manifest-path="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228935 5118 flags.go:64] FLAG: --pod-max-pids="-1" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228940 5118 flags.go:64] FLAG: --pods-per-core="0" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228944 5118 flags.go:64] FLAG: --port="10250" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228949 5118 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228953 5118 flags.go:64] FLAG: --provider-id="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228957 5118 flags.go:64] FLAG: --qos-reserved="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228962 5118 flags.go:64] FLAG: --read-only-port="10255" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228966 5118 flags.go:64] FLAG: 
--register-node="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228971 5118 flags.go:64] FLAG: --register-schedulable="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228976 5118 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228986 5118 flags.go:64] FLAG: --registry-burst="10" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228991 5118 flags.go:64] FLAG: --registry-qps="5" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228995 5118 flags.go:64] FLAG: --reserved-cpus="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.228999 5118 flags.go:64] FLAG: --reserved-memory="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229005 5118 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229010 5118 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229014 5118 flags.go:64] FLAG: --rotate-certificates="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229020 5118 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229025 5118 flags.go:64] FLAG: --runonce="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229029 5118 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229034 5118 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229039 5118 flags.go:64] FLAG: --seccomp-default="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229043 5118 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229052 5118 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229057 5118 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229062 5118 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229066 5118 flags.go:64] FLAG: --storage-driver-password="root" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229071 5118 flags.go:64] FLAG: --storage-driver-secure="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229075 5118 flags.go:64] FLAG: --storage-driver-table="stats" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229080 5118 flags.go:64] FLAG: --storage-driver-user="root" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229084 5118 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229089 5118 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229093 5118 flags.go:64] FLAG: --system-cgroups="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229097 5118 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229105 5118 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229109 5118 flags.go:64] FLAG: --tls-cert-file="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229113 5118 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229120 5118 flags.go:64] 
FLAG: --tls-min-version="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229124 5118 flags.go:64] FLAG: --tls-private-key-file="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229128 5118 flags.go:64] FLAG: --topology-manager-policy="none" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229133 5118 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229137 5118 flags.go:64] FLAG: --topology-manager-scope="container" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229142 5118 flags.go:64] FLAG: --v="2" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229149 5118 flags.go:64] FLAG: --version="false" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229155 5118 flags.go:64] FLAG: --vmodule="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229162 5118 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229167 5118 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229291 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229298 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229303 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229308 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229313 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229317 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229321 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229325 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229332 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229336 5118 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229340 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229344 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229348 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229352 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229357 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229361 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229365 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229369 5118 feature_gate.go:328] unrecognized 
feature gate: GCPClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229373 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229377 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229381 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229385 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229390 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229394 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229398 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229402 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229406 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229411 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229415 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229419 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229423 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229427 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229431 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229435 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229439 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229443 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229448 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229452 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229456 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229460 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229466 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229471 5118 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229475 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229479 5118 
feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229483 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229487 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229491 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229496 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229500 5118 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229504 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229508 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229512 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229516 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229520 5118 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229524 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229528 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229532 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229536 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229540 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229547 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229551 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229556 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229560 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229564 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229568 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229572 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229576 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229580 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229584 5118 feature_gate.go:328] 
unrecognized feature gate: AWSDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229589 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229595 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229600 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229607 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229611 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229615 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229621 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229626 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229631 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229636 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229640 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229644 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229648 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229653 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229657 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229661 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.229665 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.229797 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.244958 5118 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.244987 5118 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245051 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245057 5118 
feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245064 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245067 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245071 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245075 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245078 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245081 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245084 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245088 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245091 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245094 5118 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245099 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245105 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245110 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245115 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245121 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245125 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245129 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245133 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245137 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245142 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245146 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245149 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245153 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245156 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245159 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 
17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245162 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245166 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245169 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245174 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245178 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245184 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245189 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245193 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245229 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245236 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245240 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245244 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245248 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245252 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245257 5118 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245261 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245265 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245271 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245274 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245278 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245282 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245286 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245291 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245294 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245298 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:42:23 crc 
kubenswrapper[5118]: W1208 17:42:23.245302 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245306 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245310 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245314 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245318 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245321 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245325 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245328 5118 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245332 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245335 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245338 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245342 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245345 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245349 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245353 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245357 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245361 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245365 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245368 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245372 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245376 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245380 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245383 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245387 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245392 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245396 5118 
feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245400 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245403 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245406 5118 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245410 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245413 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245416 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245419 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245423 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.245430 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245546 5118 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245553 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245557 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245561 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245565 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245568 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245572 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245575 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245579 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245582 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245585 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245590 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245593 5118 feature_gate.go:328] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245596 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245600 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245603 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245606 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245610 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245613 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245616 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245619 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245622 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245628 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245631 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245634 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245637 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245641 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245644 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245670 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245674 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245677 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245681 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245684 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245688 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245691 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245694 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245697 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245701 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:42:23 crc 
kubenswrapper[5118]: W1208 17:42:23.245704 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245708 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245711 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245714 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245718 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245727 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245732 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245736 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245740 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245760 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245765 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245768 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245772 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245775 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245779 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245782 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245786 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245790 5118 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245793 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245797 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245801 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245804 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245807 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245811 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245814 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 
17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245817 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.245821 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246399 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246407 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246437 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246444 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246449 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246455 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246460 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246467 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246473 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246478 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246483 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246488 5118 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246493 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246519 5118 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246524 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246529 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246534 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246540 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246546 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246551 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.246559 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.246571 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false 
ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.247216 5118 server.go:962] "Client rotation is on, will bootstrap in background" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.251438 5118 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.254611 5118 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.254733 5118 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.255802 5118 server.go:1019] "Starting client certificate rotation" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.256019 5118 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.256136 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.263389 5118 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.265226 5118 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.268670 5118 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.277576 5118 log.go:25] "Validated CRI v1 runtime API" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.297412 5118 log.go:25] "Validated CRI v1 image API" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.298959 5118 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.301321 5118 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-17-36-19-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.301356 5118 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ 
major:0 minor:33 fsType:overlay blockSize:0}] Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.322126 5118 manager.go:217] Machine: {Timestamp:2025-12-08 17:42:23.320000121 +0000 UTC m=+0.221324275 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1 BootID:3b244703-86d1-4a74-bdbb-1446f2890ff6 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d6:4e:cd Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d6:4e:cd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e6:79:2f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:84:35:42 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9c:32:12 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ec:da:21 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:f0:8d:d1:7d:4e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:72:6c:d6:8f:6a:40 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.322476 5118 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.322650 5118 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.323861 5118 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.324066 5118 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.324305 5118 topology_manager.go:138] "Creating topology manager with none policy" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.324319 5118 container_manager_linux.go:306] "Creating device plugin manager" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.324345 5118 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.324588 5118 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.325050 5118 state_mem.go:36] "Initialized new in-memory state store" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.325236 5118 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.325840 5118 kubelet.go:491] "Attempting to sync node with API server" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.325863 5118 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.326030 5118 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.326055 
5118 kubelet.go:397] "Adding apiserver pod source" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.326069 5118 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.328114 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.328113 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.329561 5118 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.329577 5118 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.331012 5118 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.331034 5118 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.333229 5118 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.333433 5118 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.333844 5118 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334279 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334295 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334303 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334312 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334323 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334330 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334338 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334345 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334361 5118 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334374 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334385 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334518 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334741 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.334749 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.336564 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.351125 5118 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.351218 5118 server.go:1295] "Started kubelet" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.351525 5118 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.351663 5118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.351807 5118 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.352250 5118 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 08 17:42:23 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.354006 5118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.354043 5118 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.353089 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f4e5db65f6d9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.351156123 +0000 UTC m=+0.252480247,LastTimestamp:2025-12-08 17:42:23.351156123 +0000 UTC m=+0.252480247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.354821 5118 server.go:317] "Adding debug handlers to kubelet server" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.355094 5118 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.355145 5118 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.355349 5118 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.354837 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.355481 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.356104 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.358348 5118 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.358380 5118 factory.go:55] Registering systemd factory Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.358394 5118 factory.go:223] Registration of the systemd container factory successfully Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.359474 5118 factory.go:153] Registering CRI-O factory Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.359509 5118 factory.go:223] Registration of the crio container factory successfully Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.359540 5118 factory.go:103] Registering Raw factory Dec 08 17:42:23 crc 
kubenswrapper[5118]: I1208 17:42:23.359564 5118 manager.go:1196] Started watching for new ooms in manager Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.360489 5118 manager.go:319] Starting recovery of all containers Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.388107 5118 manager.go:324] Recovery completed Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.401915 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.401975 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.401990 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.402001 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.402015 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.402028 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.402041 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.402052 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.402068 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.402084 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 08 17:42:23 crc 
kubenswrapper[5118]: I1208 17:42:23.404423 5118 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404532 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404555 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404572 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404588 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404614 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404633 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404650 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404668 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404687 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404745 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" 
seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404764 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404782 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404799 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404816 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404832 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404850 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404867 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404947 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.404998 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405019 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405062 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 08 17:42:23 crc 
kubenswrapper[5118]: I1208 17:42:23.405083 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405100 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405117 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405135 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405152 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405169 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405186 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405205 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405222 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405239 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405257 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405274 5118 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405291 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405331 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405349 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405367 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405385 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405403 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405419 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405435 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405453 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405469 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405489 5118 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405506 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405523 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405551 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405569 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405586 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405628 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405646 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405664 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405680 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.405698 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406025 5118 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406057 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406076 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406094 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406110 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406127 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406144 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406162 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406179 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406197 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406216 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406233 5118 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406251 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406281 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406300 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406317 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406337 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406356 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406373 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406389 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406405 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406421 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406439 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406457 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406475 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406490 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406509 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406526 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406542 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406558 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406573 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406588 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406605 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406624 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406641 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406658 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406675 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406691 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406744 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406760 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406776 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406795 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406814 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406829 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406847 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406865 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406903 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406924 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406960 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406978 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.406993 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407011 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407026 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407041 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407058 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407075 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407090 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407106 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407120 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407135 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407150 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407167 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407183 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407200 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407216 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407232 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407249 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" 
volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407266 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407350 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407370 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407435 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407454 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407471 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407492 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407512 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407530 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407550 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407566 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407584 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407602 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407620 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407636 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407654 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407671 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407690 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407708 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407728 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407744 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407759 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" 
volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407777 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407795 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.407813 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408043 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408064 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408081 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408100 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408117 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408135 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408153 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408169 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408185 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408202 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408218 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408236 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408256 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408273 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408290 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408306 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408328 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408344 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408359 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408377 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408392 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408406 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408420 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408437 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408453 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408469 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408484 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408499 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408516 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408533 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408550 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408567 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408583 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408598 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408614 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408630 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408667 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408684 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408699 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408716 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408731 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408747 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408763 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408778 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408794 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408811 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408829 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408845 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.408860 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409091 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409111 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409129 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409145 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409160 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409177 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409193 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409209 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409228 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409244 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409259 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409275 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409291 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409308 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409334 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409349 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409366 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409381 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409397 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409458 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409476 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409492 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409509 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409524 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409539 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409555 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409570 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409585 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409600 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409617 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409633 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409686 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409701 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409716 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409733 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409748 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" 
volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409763 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409779 5118 reconstruct.go:97] "Volume reconstruction finished" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.409789 5118 reconciler.go:26] "Reconciler: start to sync state" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.413442 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.415439 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.415483 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.415498 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.417238 5118 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.417255 5118 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.417290 5118 state_mem.go:36] "Initialized new in-memory state store" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.421347 5118 policy_none.go:49] "None policy: Start" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.421367 5118 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.421380 5118 state_mem.go:35] "Initializing new in-memory state store" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.421374 5118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.425465 5118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.425626 5118 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.425651 5118 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.425660 5118 kubelet.go:2451] "Starting kubelet main sync loop" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.425748 5118 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.426157 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.455994 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.462161 5118 manager.go:341] "Starting Device Plugin manager" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.462309 5118 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.462323 5118 server.go:85] "Starting device plugin registration server" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.462691 5118 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.462715 5118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.463017 5118 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.463102 5118 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.463116 5118 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.467419 5118 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.467477 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.525848 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.526043 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.526749 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.526819 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.526838 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.527960 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.528052 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.528089 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.528577 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.528618 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.528629 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529208 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529310 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529330 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529341 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529428 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529446 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529667 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529685 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.529697 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.530249 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.530282 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.530290 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.530384 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.530587 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.530690 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.531964 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.531985 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.532002 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.532325 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.532367 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.532386 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.532571 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.532623 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.532650 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.533105 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.533131 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.533144 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.533207 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.533245 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.533258 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.534933 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.534979 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.536375 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.536455 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.536996 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.556965 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.557154 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.563144 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.564522 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.564565 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.564580 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.564605 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:23 crc 
kubenswrapper[5118]: E1208 17:42:23.566355 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.569339 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.591636 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.612156 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612166 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612404 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612491 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612533 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612565 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612603 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612634 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc 
kubenswrapper[5118]: I1208 17:42:23.612669 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612695 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612747 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612790 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612808 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612828 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612846 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612864 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612904 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612926 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612946 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612967 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.612999 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613021 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613042 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613012 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613305 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613583 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613767 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613859 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.613905 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.614027 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.618901 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714264 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714308 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714333 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714371 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714402 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714430 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714459 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714479 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714492 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714507 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714521 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714522 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714575 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714536 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714614 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714646 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714648 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714667 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714684 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714705 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714711 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714732 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714490 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714759 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714780 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714801 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714824 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714845 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714866 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714688 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.714447 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.767489 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.768365 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.768411 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.768435 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.768466 5118 kubelet_node_status.go:78] "Attempting to register 
node" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.768913 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.858303 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.870244 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.884636 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-da7997bd16cfa5c6cb8a6cf12c66e25ecb7d98652687edeb0ce752bcfdbde218 WatchSource:0}: Error finding container da7997bd16cfa5c6cb8a6cf12c66e25ecb7d98652687edeb0ce752bcfdbde218: Status 404 returned error can't find the container with id da7997bd16cfa5c6cb8a6cf12c66e25ecb7d98652687edeb0ce752bcfdbde218 Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.892498 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.895216 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.912545 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.917526 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-9a6f176f4af4772e871f27fe65646fd475fb89c71718e6d563459b6f0c9fc045 WatchSource:0}: Error finding container 9a6f176f4af4772e871f27fe65646fd475fb89c71718e6d563459b6f0c9fc045: Status 404 returned error can't find the container with id 9a6f176f4af4772e871f27fe65646fd475fb89c71718e6d563459b6f0c9fc045 Dec 08 17:42:23 crc kubenswrapper[5118]: I1208 17:42:23.919084 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:23 crc kubenswrapper[5118]: W1208 17:42:23.935721 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-b68168949e69a54d73b40bacce82f3c664b018997a62706572d57f326fb33e18 WatchSource:0}: Error finding container b68168949e69a54d73b40bacce82f3c664b018997a62706572d57f326fb33e18: Status 404 returned error can't find the container with id b68168949e69a54d73b40bacce82f3c664b018997a62706572d57f326fb33e18 Dec 08 17:42:23 crc kubenswrapper[5118]: E1208 17:42:23.958684 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.169746 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.171686 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.171747 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.171762 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.171792 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.172372 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.337760 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.338815 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.353114 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.433860 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"cfca8d494212b21c8a44513c5dd06e44549b08479d7bf1138bd5fb15936ccee8"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.433950 5118 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"96cd76fe3fd0d346629231863e0ab81a754a621d59c4d4581acdd3de50033012"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.435447 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.435476 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b68168949e69a54d73b40bacce82f3c664b018997a62706572d57f326fb33e18"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.435629 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.436316 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.436355 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.436367 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.436622 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.438203 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="db2010fe01dd1531a380e379780f6b723d7fbe3853fa0164bd965932a3bf9985" exitCode=0 Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.438240 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"db2010fe01dd1531a380e379780f6b723d7fbe3853fa0164bd965932a3bf9985"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.438279 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9a6f176f4af4772e871f27fe65646fd475fb89c71718e6d563459b6f0c9fc045"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.438436 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.439198 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.439318 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.439331 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.439916 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:24 crc kubenswrapper[5118]: 
I1208 17:42:24.439983 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"33c98a2d2f7f5ee864d29cb06b16d3fd3fbc99f98965d7c3a2c0f34bae6545c8"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.440017 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b9e7d00c25b2b416721e6ed6315d8459db9ff9e871c8cab4fa78cd3712d23402"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.440112 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.440632 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.440658 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.440668 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.440797 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.442514 5118 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="dd17024ed34c33fec3e296a713e5e14bef7dcfa92e492b6f8bddb105a5f0d9d2" exitCode=0 Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.442548 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"dd17024ed34c33fec3e296a713e5e14bef7dcfa92e492b6f8bddb105a5f0d9d2"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.442566 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"da7997bd16cfa5c6cb8a6cf12c66e25ecb7d98652687edeb0ce752bcfdbde218"} Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.442635 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.443122 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.443149 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.443160 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.443320 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.696455 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.760326 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.934836 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.972668 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.974193 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.974254 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.974269 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:24 crc kubenswrapper[5118]: I1208 17:42:24.974297 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:24 crc kubenswrapper[5118]: E1208 17:42:24.974807 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.338143 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.363489 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:42:25 crc kubenswrapper[5118]: E1208 17:42:25.365101 5118 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.446907 5118 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="33c98a2d2f7f5ee864d29cb06b16d3fd3fbc99f98965d7c3a2c0f34bae6545c8" exitCode=0 Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.446999 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"33c98a2d2f7f5ee864d29cb06b16d3fd3fbc99f98965d7c3a2c0f34bae6545c8"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.447050 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"677558b2bc6e63519df956a2a4d0a56e3e5bd5be9da243e1cf7f8c93a152c362"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.447064 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"fe54a700b3e33ab05122ea3de8d4c794b951deec933a88704f0bcb0ffa22893f"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.447075 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"f3440b4452b60509e3e040cf572e5e60894bc9a6567e9de291b7da6fdf682fa8"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.447248 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.448084 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.448084 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"f4eadc29321fff86ab58be2c14459298a72ab5e872e7059d7e3d0bc5492c9504"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.448127 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.448230 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.448292 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:25 crc kubenswrapper[5118]: E1208 17:42:25.448439 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.449434 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.449471 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.449488 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:25 crc kubenswrapper[5118]: E1208 17:42:25.449720 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.462200 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.462432 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f4eb252f824da37bd546e141c2cc9badc7adb7d51a3ea2f9270311119403c238"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.462498 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7d530b6dea7712c8b4797040bc123b6178ce49d36eaf7d649d8ed2d19ac499dd"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.462511 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"34a764685c0c2e874d1089a49379fd329c1f930a48eb0ddf1ef2647c6603393a"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.463855 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.463909 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.463925 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:25 crc kubenswrapper[5118]: E1208 17:42:25.464149 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.465382 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce" exitCode=0 Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.465446 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.465591 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.466029 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.466063 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.466076 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:25 crc kubenswrapper[5118]: E1208 17:42:25.466264 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.467609 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.467848 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="e6ad40373544de6ece1195201d79dbb79f803fd3816a7af329f7ab48a7a6bb62" exitCode=0 Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.467893 5118 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"e6ad40373544de6ece1195201d79dbb79f803fd3816a7af329f7ab48a7a6bb62"} Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.468032 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.468608 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.468636 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.468648 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.468675 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.468744 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:25 crc kubenswrapper[5118]: I1208 17:42:25.468763 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:25 crc kubenswrapper[5118]: E1208 17:42:25.469223 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.474173 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e"} Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.474218 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957"} Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.474232 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425"} Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.474241 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8"} Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.477768 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="e0a2afa4b87482acb27049ccaeb319375e4c7df569ddbb8e45f27ee091a3deba" exitCode=0 Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.477853 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"e0a2afa4b87482acb27049ccaeb319375e4c7df569ddbb8e45f27ee091a3deba"} Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.478051 5118 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.478088 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.479022 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.479034 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.479056 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.479061 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.479067 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.479072 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:26 crc kubenswrapper[5118]: E1208 17:42:26.479381 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:26 crc kubenswrapper[5118]: E1208 17:42:26.479570 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.575339 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.576288 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.576321 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.576330 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.576352 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.926249 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.926421 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.926868 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.926907 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:26 crc kubenswrapper[5118]: I1208 17:42:26.926918 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:26 crc kubenswrapper[5118]: E1208 17:42:26.927152 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not 
found" node="crc" Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.485803 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"607593e5fe21fff98c7d3cba59d9e08df38aeacb3b9a57dc2840751b37b3ab6a"} Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.486003 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.487650 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.487697 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.487712 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:27 crc kubenswrapper[5118]: E1208 17:42:27.487988 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.491163 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"381d48bcc9031293985a6617b3d7b8c1380512ee26a4eb315ff75b4da6f7b833"} Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.491206 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b08b50f92322d10d9703a7dc7a50454125a3bb17188bd93d128fc501b59c858e"} Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.491219 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c0afe4db4de9c4b0acfd2033320086fda547808d137b96aa9c044c490b198e18"} Dec 08 17:42:27 crc kubenswrapper[5118]: I1208 17:42:27.491231 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2236fc4f8452daf4bda045181026dd6161247da1d42f11cbea29ea3ca077a3ed"} Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.179502 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.499053 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"38e3197b5ae985bbeea1f9b6b6c31fb3b7f637f07bf9d5928047f1757501679c"} Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.499203 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.499264 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.499322 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.499972 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:28 
crc kubenswrapper[5118]: I1208 17:42:28.500030 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.500055 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:28 crc kubenswrapper[5118]: E1208 17:42:28.500749 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.500766 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.500855 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:28 crc kubenswrapper[5118]: I1208 17:42:28.500922 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:28 crc kubenswrapper[5118]: E1208 17:42:28.501257 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.188462 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.188704 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.189911 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.189961 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.189976 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:29 crc kubenswrapper[5118]: E1208 17:42:29.190369 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.392594 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.502208 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.504723 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.504790 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.511528 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.511596 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.511625 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:29 crc kubenswrapper[5118]: 
I1208 17:42:29.512932 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.512979 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.512999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:29 crc kubenswrapper[5118]: E1208 17:42:29.513509 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:29 crc kubenswrapper[5118]: E1208 17:42:29.513918 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:29 crc kubenswrapper[5118]: I1208 17:42:29.839532 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 08 17:42:30 crc kubenswrapper[5118]: I1208 17:42:30.504408 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:30 crc kubenswrapper[5118]: I1208 17:42:30.505791 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:30 crc kubenswrapper[5118]: I1208 17:42:30.505847 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:30 crc kubenswrapper[5118]: I1208 17:42:30.505867 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:30 crc kubenswrapper[5118]: E1208 17:42:30.506825 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.045869 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.046215 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.046286 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.047747 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.047826 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.047847 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:31 crc kubenswrapper[5118]: E1208 17:42:31.048597 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.665733 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.666137 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:31 crc 
kubenswrapper[5118]: I1208 17:42:31.667366 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.667409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.667423 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:31 crc kubenswrapper[5118]: E1208 17:42:31.667803 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.887166 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.887457 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.888485 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.888533 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.888550 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:31 crc kubenswrapper[5118]: E1208 17:42:31.888906 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:31 crc kubenswrapper[5118]: I1208 17:42:31.898939 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:32 crc kubenswrapper[5118]: I1208 17:42:32.220104 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:32 crc kubenswrapper[5118]: I1208 17:42:32.513263 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:32 crc kubenswrapper[5118]: I1208 17:42:32.514207 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:32 crc kubenswrapper[5118]: I1208 17:42:32.514290 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:32 crc kubenswrapper[5118]: I1208 17:42:32.514312 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:32 crc kubenswrapper[5118]: E1208 17:42:32.514766 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:33 crc kubenswrapper[5118]: E1208 17:42:33.468110 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:42:33 crc kubenswrapper[5118]: I1208 17:42:33.515664 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:33 crc kubenswrapper[5118]: I1208 17:42:33.516390 5118 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:33 crc kubenswrapper[5118]: I1208 17:42:33.516457 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:33 crc kubenswrapper[5118]: I1208 17:42:33.516484 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:33 crc kubenswrapper[5118]: E1208 17:42:33.517164 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.423324 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.423582 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.424592 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.424646 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.424658 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:35 crc kubenswrapper[5118]: E1208 17:42:35.425074 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.430028 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.527579 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.528420 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.528471 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:35 crc kubenswrapper[5118]: I1208 17:42:35.528486 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:35 crc kubenswrapper[5118]: E1208 17:42:35.528870 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:36 crc kubenswrapper[5118]: I1208 17:42:36.337661 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 08 17:42:36 crc kubenswrapper[5118]: E1208 17:42:36.362534 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 08 17:42:36 crc kubenswrapper[5118]: I1208 17:42:36.555399 5118 trace.go:236] Trace[118638552]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:26.553) (total time: 10001ms): Dec 08 17:42:36 crc kubenswrapper[5118]: Trace[118638552]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:42:36.555) Dec 08 17:42:36 crc kubenswrapper[5118]: Trace[118638552]: [10.001884756s] [10.001884756s] END Dec 08 17:42:36 crc kubenswrapper[5118]: E1208 17:42:36.555442 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:36 crc kubenswrapper[5118]: E1208 17:42:36.577051 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 08 17:42:36 crc kubenswrapper[5118]: I1208 17:42:36.798531 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 17:42:36 crc kubenswrapper[5118]: I1208 17:42:36.798640 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 17:42:36 crc kubenswrapper[5118]: I1208 17:42:36.931581 5118 trace.go:236] Trace[74213136]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:26.930) (total time: 10000ms): Dec 08 17:42:36 crc kubenswrapper[5118]: Trace[74213136]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (17:42:36.931) Dec 08 17:42:36 crc kubenswrapper[5118]: Trace[74213136]: [10.000766321s] [10.000766321s] END Dec 08 17:42:36 crc kubenswrapper[5118]: E1208 17:42:36.931624 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.236547 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.236611 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 17:42:37 crc 
kubenswrapper[5118]: I1208 17:42:37.244703 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.244766 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.760481 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.761048 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.762252 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.762365 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.762396 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:37 crc kubenswrapper[5118]: E1208 17:42:37.763220 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:37 crc kubenswrapper[5118]: I1208 17:42:37.795759 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 08 17:42:38 crc kubenswrapper[5118]: I1208 17:42:38.424248 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:42:38 crc kubenswrapper[5118]: I1208 17:42:38.424332 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:42:38 crc kubenswrapper[5118]: I1208 17:42:38.536267 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:38 crc kubenswrapper[5118]: I1208 17:42:38.536825 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:38 crc kubenswrapper[5118]: I1208 17:42:38.536935 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:38 crc kubenswrapper[5118]: I1208 17:42:38.537004 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:38 crc 
kubenswrapper[5118]: E1208 17:42:38.537371 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:38 crc kubenswrapper[5118]: I1208 17:42:38.554311 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.540671 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.542735 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.542814 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.542834 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:39 crc kubenswrapper[5118]: E1208 17:42:39.543825 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:39 crc kubenswrapper[5118]: E1208 17:42:39.571637 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.777249 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.779385 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.779429 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.779439 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:39 crc kubenswrapper[5118]: I1208 17:42:39.779467 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:39 crc kubenswrapper[5118]: E1208 17:42:39.795800 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.053048 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.054290 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.056520 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.056649 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.056676 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 17:42:41 crc kubenswrapper[5118]: E1208 17:42:41.057361 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.063287 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:41 crc kubenswrapper[5118]: E1208 17:42:41.262735 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:41 crc kubenswrapper[5118]: E1208 17:42:41.477032 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.545309 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.546205 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.546411 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:41 crc kubenswrapper[5118]: I1208 17:42:41.546630 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:41 crc kubenswrapper[5118]: E1208 17:42:41.547472 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.226418 5118 trace.go:236] Trace[2030347223]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:27.213) (total time: 15013ms): Dec 08 17:42:42 crc kubenswrapper[5118]: Trace[2030347223]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 15013ms (17:42:42.226) Dec 08 17:42:42 crc kubenswrapper[5118]: Trace[2030347223]: [15.013158021s] [15.013158021s] END Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.226467 5118 trace.go:236] Trace[156586518]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:27.665) (total time: 14560ms): Dec 08 17:42:42 crc kubenswrapper[5118]: Trace[156586518]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14560ms (17:42:42.226) Dec 08 17:42:42 crc kubenswrapper[5118]: Trace[156586518]: [14.560402839s] [14.560402839s] END Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.226543 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.226476 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.226599 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5db65f6d9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.351156123 +0000 UTC m=+0.252480247,LastTimestamp:2025-12-08 17:42:23.351156123 +0000 UTC m=+0.252480247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.228796 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.236783 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.245622 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba354515 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is 
now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,LastTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.255399 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dbd341691 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.465756305 +0000 UTC m=+0.367080399,LastTimestamp:2025-12-08 17:42:23.465756305 +0000 UTC m=+0.367080399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.256398 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.264312 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba34bf71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.526791051 +0000 UTC m=+0.428115165,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.275830 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba3514d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.526829253 +0000 UTC m=+0.428153367,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.284391 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba354515\" is forbidden: User \"system:anonymous\" cannot 
patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba354515 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,LastTimestamp:2025-12-08 17:42:23.526846113 +0000 UTC m=+0.428170227,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.293151 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba34bf71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.528603828 +0000 UTC m=+0.429927922,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.300914 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba3514d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.528624699 +0000 UTC m=+0.429948793,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.308200 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba354515\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba354515 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,LastTimestamp:2025-12-08 17:42:23.528633679 +0000 UTC m=+0.429957773,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.316094 5118 event.go:359] "Server rejected 
event (will not retry!)" err="events \"crc.187f4e5dba34bf71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.52932255 +0000 UTC m=+0.430646654,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.324465 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba3514d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.52933589 +0000 UTC m=+0.430660004,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.331853 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba354515\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba354515 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,LastTimestamp:2025-12-08 17:42:23.529347411 +0000 UTC m=+0.430671515,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.335791 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba34bf71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.529678365 +0000 UTC m=+0.431002479,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.340157 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba3514d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.529691746 +0000 UTC m=+0.431015850,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.340746 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.342499 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba354515\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba354515 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,LastTimestamp:2025-12-08 17:42:23.529703596 +0000 UTC m=+0.431027710,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.346560 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba34bf71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.53026918 +0000 UTC m=+0.431593274,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.349840 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba3514d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.530287031 +0000 UTC m=+0.431611115,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.353523 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba354515\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba354515 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,LastTimestamp:2025-12-08 17:42:23.530296001 +0000 UTC m=+0.431620095,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.356629 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba34bf71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.531979724 +0000 UTC m=+0.433303818,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.360051 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba3514d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.531998355 +0000 UTC m=+0.433322449,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.361028 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba354515\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba354515 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415502101 +0000 UTC m=+0.316826185,LastTimestamp:2025-12-08 17:42:23.532007245 +0000 UTC m=+0.433331339,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.368677 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba34bf71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba34bf71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415467889 +0000 UTC m=+0.316791983,LastTimestamp:2025-12-08 17:42:23.5323522 +0000 UTC m=+0.433676334,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.375588 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5dba3514d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5dba3514d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.415489751 +0000 UTC m=+0.316813845,LastTimestamp:2025-12-08 17:42:23.532377461 +0000 UTC m=+0.433701595,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.390972 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5dd6d1ae09 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.895514633 +0000 UTC m=+0.796838717,LastTimestamp:2025-12-08 17:42:23.895514633 +0000 UTC 
m=+0.796838717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.400217 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5dd7310b0a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.901764362 +0000 UTC m=+0.803088456,LastTimestamp:2025-12-08 17:42:23.901764362 +0000 UTC m=+0.803088456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.411253 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5dd8678f2b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.922114347 +0000 UTC m=+0.823438451,LastTimestamp:2025-12-08 17:42:23.922114347 +0000 UTC m=+0.823438451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.421334 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5dd94e0e14 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.937220116 +0000 UTC m=+0.838544210,LastTimestamp:2025-12-08 17:42:23.937220116 +0000 UTC m=+0.838544210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.429873 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5dd95ba7fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:23.938111485 +0000 UTC m=+0.839435599,LastTimestamp:2025-12-08 17:42:23.938111485 +0000 UTC m=+0.839435599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.442729 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5df25e77e9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.357726185 +0000 UTC m=+1.259050299,LastTimestamp:2025-12-08 17:42:24.357726185 +0000 UTC m=+1.259050299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.452031 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5df26baee8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.358592232 +0000 UTC m=+1.259916326,LastTimestamp:2025-12-08 17:42:24.358592232 +0000 UTC m=+1.259916326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.458507 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5df2b24e33 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.363220531 +0000 UTC m=+1.264544635,LastTimestamp:2025-12-08 17:42:24.363220531 +0000 UTC m=+1.264544635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.463558 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5df2d365e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.365389284 +0000 UTC m=+1.266713378,LastTimestamp:2025-12-08 17:42:24.365389284 +0000 UTC m=+1.266713378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.469491 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5df2db8a1f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.365922847 +0000 UTC m=+1.267246931,LastTimestamp:2025-12-08 17:42:24.365922847 +0000 UTC m=+1.267246931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.476512 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5df396f93e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.378206526 +0000 UTC m=+1.279530660,LastTimestamp:2025-12-08 
17:42:24.378206526 +0000 UTC m=+1.279530660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.485817 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5df3a29660 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.378967648 +0000 UTC m=+1.280291742,LastTimestamp:2025-12-08 17:42:24.378967648 +0000 UTC m=+1.280291742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.489685 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34598->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.489772 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34598->192.168.126.11:17697: read: connection reset by peer" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.490277 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.490333 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.492774 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5df3a94aa9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.379407017 +0000 UTC m=+1.280731111,LastTimestamp:2025-12-08 17:42:24.379407017 +0000 UTC m=+1.280731111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.497779 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5df3b8438f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.380388239 +0000 UTC m=+1.281712333,LastTimestamp:2025-12-08 17:42:24.380388239 +0000 UTC m=+1.281712333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.505136 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5df3c9ae94 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.381529748 +0000 UTC m=+1.282853882,LastTimestamp:2025-12-08 17:42:24.381529748 +0000 UTC m=+1.282853882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.512153 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5df4754582 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.392775042 +0000 UTC m=+1.294099176,LastTimestamp:2025-12-08 17:42:24.392775042 +0000 UTC m=+1.294099176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.518301 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5df763e036 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.441966646 +0000 UTC m=+1.343290740,LastTimestamp:2025-12-08 17:42:24.441966646 +0000 UTC m=+1.343290740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.528937 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5df769d3e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.442356704 +0000 UTC m=+1.343680808,LastTimestamp:2025-12-08 17:42:24.442356704 +0000 UTC m=+1.343680808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.535093 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5df793c9e6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.445106662 +0000 UTC m=+1.346430766,LastTimestamp:2025-12-08 17:42:24.445106662 +0000 UTC m=+1.346430766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 
08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.541227 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e029070dc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.629436636 +0000 UTC m=+1.530760730,LastTimestamp:2025-12-08 17:42:24.629436636 +0000 UTC m=+1.530760730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.548247 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.548791 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e032db83c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.63974406 +0000 UTC m=+1.541068154,LastTimestamp:2025-12-08 17:42:24.63974406 +0000 UTC m=+1.541068154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.550249 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="607593e5fe21fff98c7d3cba59d9e08df38aeacb3b9a57dc2840751b37b3ab6a" exitCode=255 Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.550308 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"607593e5fe21fff98c7d3cba59d9e08df38aeacb3b9a57dc2840751b37b3ab6a"} Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.550533 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.551078 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.551119 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.551133 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 
17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.551508 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:42 crc kubenswrapper[5118]: I1208 17:42:42.551777 5118 scope.go:117] "RemoveContainer" containerID="607593e5fe21fff98c7d3cba59d9e08df38aeacb3b9a57dc2840751b37b3ab6a" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.552454 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e0338f26b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.640479851 +0000 UTC m=+1.541803945,LastTimestamp:2025-12-08 17:42:24.640479851 +0000 UTC m=+1.541803945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.556655 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e034aa7be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.641640382 +0000 UTC m=+1.542964476,LastTimestamp:2025-12-08 17:42:24.641640382 +0000 UTC m=+1.542964476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.560359 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e042678ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.656046251 +0000 UTC m=+1.557370345,LastTimestamp:2025-12-08 17:42:24.656046251 +0000 UTC 
m=+1.557370345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.565450 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5e044f40c4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.658718916 +0000 UTC m=+1.560043010,LastTimestamp:2025-12-08 17:42:24.658718916 +0000 UTC m=+1.560043010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.571129 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e045354e9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.658986217 +0000 UTC m=+1.560310311,LastTimestamp:2025-12-08 17:42:24.658986217 +0000 UTC m=+1.560310311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.576055 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e04c56a95 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.666462869 +0000 UTC m=+1.567786963,LastTimestamp:2025-12-08 17:42:24.666462869 +0000 UTC m=+1.567786963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.580642 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e04da39a5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.667826597 +0000 UTC m=+1.569150691,LastTimestamp:2025-12-08 17:42:24.667826597 +0000 UTC m=+1.569150691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.584936 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5e057f6ccd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.678653133 +0000 UTC m=+1.579977227,LastTimestamp:2025-12-08 17:42:24.678653133 +0000 UTC m=+1.579977227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.588656 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e10976760 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.864773984 +0000 UTC m=+1.766098068,LastTimestamp:2025-12-08 17:42:24.864773984 +0000 UTC m=+1.766098068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.592993 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e116740f8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.87839564 +0000 UTC m=+1.779719734,LastTimestamp:2025-12-08 17:42:24.87839564 +0000 UTC m=+1.779719734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.597815 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e1174ddd9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.879287769 +0000 UTC m=+1.780611863,LastTimestamp:2025-12-08 17:42:24.879287769 +0000 UTC m=+1.780611863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.603246 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e1cdeab2f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.070770991 +0000 UTC m=+1.972095085,LastTimestamp:2025-12-08 17:42:25.070770991 +0000 UTC m=+1.972095085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.608528 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5e1d798b92 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.080920978 +0000 UTC m=+1.982245072,LastTimestamp:2025-12-08 17:42:25.080920978 +0000 UTC m=+1.982245072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.614356 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e1e4aabe4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.094626276 +0000 UTC m=+1.995950370,LastTimestamp:2025-12-08 17:42:25.094626276 +0000 UTC m=+1.995950370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.619661 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e1ece3b8b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.103248267 +0000 UTC m=+2.004572361,LastTimestamp:2025-12-08 17:42:25.103248267 +0000 UTC m=+2.004572361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.624843 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e1f0f6213 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.107517971 +0000 UTC m=+2.008842055,LastTimestamp:2025-12-08 17:42:25.107517971 +0000 UTC m=+2.008842055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.629493 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e2be10232 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.322582578 +0000 UTC m=+2.223906672,LastTimestamp:2025-12-08 17:42:25.322582578 +0000 UTC m=+2.223906672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.634648 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5e2c645339 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.331188537 +0000 UTC m=+2.232512631,LastTimestamp:2025-12-08 17:42:25.331188537 +0000 UTC m=+2.232512631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.641262 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e348376d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.467446995 +0000 UTC m=+2.368771089,LastTimestamp:2025-12-08 17:42:25.467446995 +0000 UTC m=+2.368771089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.646066 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e3538e105 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.479336197 +0000 UTC m=+2.380660291,LastTimestamp:2025-12-08 17:42:25.479336197 +0000 UTC m=+2.380660291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.650242 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e43602c75 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.716792437 +0000 UTC m=+2.618116531,LastTimestamp:2025-12-08 17:42:25.716792437 +0000 UTC m=+2.618116531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.653531 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e4368ccd1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.717357777 +0000 UTC m=+2.618681871,LastTimestamp:2025-12-08 17:42:25.717357777 +0000 UTC m=+2.618681871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.657472 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e4413be98 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.728560792 +0000 UTC m=+2.629884886,LastTimestamp:2025-12-08 17:42:25.728560792 +0000 UTC m=+2.629884886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.661894 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e4417ea0d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.728834061 +0000 UTC m=+2.630158155,LastTimestamp:2025-12-08 17:42:25.728834061 +0000 UTC m=+2.630158155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.665456 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e4423f9bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.729624509 +0000 UTC m=+2.630948603,LastTimestamp:2025-12-08 17:42:25.729624509 +0000 UTC m=+2.630948603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: 
E1208 17:42:42.669009 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e4f8194ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.920308395 +0000 UTC m=+2.821632499,LastTimestamp:2025-12-08 17:42:25.920308395 +0000 UTC m=+2.821632499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.674541 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e502df368 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.93160484 +0000 UTC m=+2.832928944,LastTimestamp:2025-12-08 17:42:25.93160484 +0000 UTC m=+2.832928944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.678063 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e5048f4fb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.933374715 +0000 UTC m=+2.834698819,LastTimestamp:2025-12-08 17:42:25.933374715 +0000 UTC m=+2.834698819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.681447 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e5b04449e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.113422494 +0000 UTC m=+3.014746598,LastTimestamp:2025-12-08 17:42:26.113422494 +0000 UTC m=+3.014746598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.684941 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e5bb1c2db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.124792539 +0000 UTC m=+3.026116653,LastTimestamp:2025-12-08 17:42:26.124792539 +0000 UTC m=+3.026116653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.690221 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e5bc1fe65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.125856357 +0000 UTC m=+3.027180451,LastTimestamp:2025-12-08 17:42:26.125856357 +0000 UTC m=+3.027180451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.693938 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e67743772 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.322085746 +0000 UTC m=+3.223409850,LastTimestamp:2025-12-08 17:42:26.322085746 +0000 UTC m=+3.223409850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.697565 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e6820522e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.333364782 +0000 UTC m=+3.234688876,LastTimestamp:2025-12-08 17:42:26.333364782 +0000 UTC m=+3.234688876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.701173 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e6830ca1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.334444058 +0000 UTC m=+3.235768162,LastTimestamp:2025-12-08 17:42:26.334444058 +0000 UTC m=+3.235768162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.705124 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e70e848ce openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.48068731 +0000 UTC m=+3.382011414,LastTimestamp:2025-12-08 17:42:26.48068731 +0000 UTC m=+3.382011414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.709396 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e72d8b39f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.513220511 +0000 UTC m=+3.414544625,LastTimestamp:2025-12-08 17:42:26.513220511 +0000 UTC m=+3.414544625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.714050 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e7389c57c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.524824956 +0000 UTC m=+3.426149060,LastTimestamp:2025-12-08 17:42:26.524824956 +0000 UTC m=+3.426149060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.717907 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e7ca14808 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.677360648 +0000 UTC m=+3.578684742,LastTimestamp:2025-12-08 17:42:26.677360648 +0000 UTC m=+3.578684742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc 
kubenswrapper[5118]: E1208 17:42:42.721185 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e7d5568f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.689165553 +0000 UTC m=+3.590489657,LastTimestamp:2025-12-08 17:42:26.689165553 +0000 UTC m=+3.590489657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.724442 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e7d62b95e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.69003811 +0000 UTC m=+3.591362204,LastTimestamp:2025-12-08 17:42:26.69003811 +0000 UTC m=+3.591362204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.727658 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e8af595ad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.917766573 +0000 UTC m=+3.819090667,LastTimestamp:2025-12-08 17:42:26.917766573 +0000 UTC m=+3.819090667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.731608 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e8bc9d4e6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.93167639 +0000 UTC m=+3.833000484,LastTimestamp:2025-12-08 17:42:26.93167639 +0000 UTC m=+3.833000484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.735828 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e8bd5fed6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.932473558 +0000 UTC m=+3.833797652,LastTimestamp:2025-12-08 17:42:26.932473558 +0000 UTC m=+3.833797652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.740147 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e97e75239 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.134935609 +0000 UTC m=+4.036259733,LastTimestamp:2025-12-08 17:42:27.134935609 +0000 UTC m=+4.036259733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.744046 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e98d16228 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.150275112 +0000 UTC m=+4.051599246,LastTimestamp:2025-12-08 17:42:27.150275112 +0000 UTC m=+4.051599246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.748293 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5e98e5d52e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.151615278 +0000 UTC m=+4.052939372,LastTimestamp:2025-12-08 17:42:27.151615278 +0000 UTC m=+4.052939372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.751752 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5ea5689ca8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.361512616 +0000 UTC m=+4.262836740,LastTimestamp:2025-12-08 17:42:27.361512616 +0000 UTC m=+4.262836740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.757582 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5ea60b52f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.372176112 +0000 UTC m=+4.273500206,LastTimestamp:2025-12-08 17:42:27.372176112 +0000 UTC m=+4.273500206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.762988 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5ea61eb2bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.373445819 +0000 UTC m=+4.274769913,LastTimestamp:2025-12-08 17:42:27.373445819 +0000 UTC m=+4.274769913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.767144 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5eb07e0d69 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.547467113 +0000 UTC m=+4.448791227,LastTimestamp:2025-12-08 17:42:27.547467113 +0000 UTC m=+4.448791227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.772592 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5eb1348da3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:27.559427491 +0000 UTC m=+4.460751585,LastTimestamp:2025-12-08 17:42:27.559427491 +0000 UTC m=+4.460751585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.779450 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:42:42 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e60d7e72867 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 08 17:42:42 crc kubenswrapper[5118]: body: Dec 08 17:42:42 crc kubenswrapper[5118]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:36.798601319 +0000 UTC m=+13.699925463,LastTimestamp:2025-12-08 17:42:36.798601319 +0000 UTC m=+13.699925463,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:42:42 crc kubenswrapper[5118]: > Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.783917 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e60d7e8d76e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:36.798711662 +0000 UTC m=+13.700035806,LastTimestamp:2025-12-08 17:42:36.798711662 +0000 UTC m=+13.700035806,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.788402 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:42:42 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e60f202532c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 17:42:42 crc kubenswrapper[5118]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:42:42 crc kubenswrapper[5118]: Dec 08 17:42:42 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:37.236589356 +0000 UTC m=+14.137913450,LastTimestamp:2025-12-08 17:42:37.236589356 +0000 UTC m=+14.137913450,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:42:42 crc kubenswrapper[5118]: > Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.792920 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e60f20303c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:37.236634567 +0000 UTC m=+14.137958681,LastTimestamp:2025-12-08 17:42:37.236634567 +0000 UTC m=+14.137958681,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.797681 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e60f202532c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:42:42 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e60f202532c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 17:42:42 crc kubenswrapper[5118]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:42:42 crc kubenswrapper[5118]: Dec 08 17:42:42 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:37.236589356 +0000 UTC m=+14.137913450,LastTimestamp:2025-12-08 17:42:37.244743959 +0000 UTC m=+14.146068053,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:42:42 crc kubenswrapper[5118]: > Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.803539 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e60f20303c7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e60f20303c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:37.236634567 +0000 UTC m=+14.137958681,LastTimestamp:2025-12-08 17:42:37.24478858 +0000 UTC m=+14.146112684,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.810665 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 17:42:42 crc kubenswrapper[5118]: 
&Event{ObjectMeta:{kube-controller-manager-crc.187f4e6138cd6ea2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 08 17:42:42 crc kubenswrapper[5118]: body: Dec 08 17:42:42 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:38.424305314 +0000 UTC m=+15.325629408,LastTimestamp:2025-12-08 17:42:38.424305314 +0000 UTC m=+15.325629408,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:42:42 crc kubenswrapper[5118]: > Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.815334 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e6138ce29e7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:38.424353255 +0000 UTC m=+15.325677349,LastTimestamp:2025-12-08 17:42:38.424353255 +0000 UTC m=+15.325677349,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.819930 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:42:42 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e622b1f0cf8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:34598->192.168.126.11:17697: read: connection reset by peer Dec 08 17:42:42 crc kubenswrapper[5118]: body: Dec 08 17:42:42 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:42.489740536 +0000 UTC m=+19.391064660,LastTimestamp:2025-12-08 17:42:42.489740536 +0000 UTC m=+19.391064660,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 
17:42:42 crc kubenswrapper[5118]: > Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.823325 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e622b1fff29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34598->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:42.489802537 +0000 UTC m=+19.391126641,LastTimestamp:2025-12-08 17:42:42.489802537 +0000 UTC m=+19.391126641,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.827770 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:42:42 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e622b27c4f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 08 17:42:42 crc kubenswrapper[5118]: body: Dec 08 17:42:42 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:42.490311921 +0000 UTC m=+19.391636025,LastTimestamp:2025-12-08 17:42:42.490311921 +0000 UTC m=+19.391636025,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:42:42 crc kubenswrapper[5118]: > Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.833096 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e622b287104 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:42.490355972 +0000 UTC m=+19.391680076,LastTimestamp:2025-12-08 17:42:42.490355972 +0000 UTC m=+19.391680076,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.838218 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5e6830ca1a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e6830ca1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.334444058 +0000 UTC m=+3.235768162,LastTimestamp:2025-12-08 17:42:42.552680412 +0000 UTC m=+19.454004506,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.843212 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5e72d8b39f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e72d8b39f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.513220511 +0000 UTC m=+3.414544625,LastTimestamp:2025-12-08 17:42:42.722782162 +0000 UTC m=+19.624106256,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5118]: E1208 17:42:42.847077 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5e7389c57c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e7389c57c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.524824956 +0000 UTC m=+3.426149060,LastTimestamp:2025-12-08 17:42:42.734152432 +0000 UTC m=+19.635476526,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:43 crc kubenswrapper[5118]: I1208 17:42:43.341208 5118 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:43 crc kubenswrapper[5118]: E1208 17:42:43.468321 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:42:43 crc kubenswrapper[5118]: I1208 17:42:43.554501 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:42:43 crc kubenswrapper[5118]: I1208 17:42:43.556392 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5"} Dec 08 17:42:43 crc kubenswrapper[5118]: I1208 17:42:43.556678 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:43 crc kubenswrapper[5118]: I1208 17:42:43.557393 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:43 crc kubenswrapper[5118]: I1208 17:42:43.557441 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:43 crc kubenswrapper[5118]: I1208 17:42:43.557456 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:43 crc kubenswrapper[5118]: E1208 17:42:43.557908 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.341755 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.560311 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.560797 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.562774 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5" exitCode=255 Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.562838 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5"} Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.562929 5118 scope.go:117] "RemoveContainer" containerID="607593e5fe21fff98c7d3cba59d9e08df38aeacb3b9a57dc2840751b37b3ab6a" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.563181 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume 
controller attach/detach" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.563720 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.563758 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.563772 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:44 crc kubenswrapper[5118]: E1208 17:42:44.564136 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:44 crc kubenswrapper[5118]: I1208 17:42:44.564405 5118 scope.go:117] "RemoveContainer" containerID="64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5" Dec 08 17:42:44 crc kubenswrapper[5118]: E1208 17:42:44.564653 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:44 crc kubenswrapper[5118]: E1208 17:42:44.571835 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e62a6cafa8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:44.564605582 +0000 UTC m=+21.465929676,LastTimestamp:2025-12-08 17:42:44.564605582 +0000 UTC m=+21.465929676,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.345453 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.427228 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.427446 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.428273 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.428333 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.428350 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:45 crc kubenswrapper[5118]: E1208 17:42:45.428655 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.431201 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.568042 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.570848 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.571712 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.571756 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:45 crc kubenswrapper[5118]: I1208 17:42:45.571769 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:45 crc kubenswrapper[5118]: E1208 17:42:45.572219 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:45 crc kubenswrapper[5118]: E1208 17:42:45.706706 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:45 crc kubenswrapper[5118]: E1208 17:42:45.978978 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.197027 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.198038 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.198092 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.198108 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.198279 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:46 crc kubenswrapper[5118]: E1208 17:42:46.210903 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" 
Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.345103 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:46 crc kubenswrapper[5118]: E1208 17:42:46.485112 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.797653 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.798075 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.799224 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.799280 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.799293 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:46 crc kubenswrapper[5118]: E1208 17:42:46.800083 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:46 crc kubenswrapper[5118]: I1208 17:42:46.800447 5118 scope.go:117] "RemoveContainer" containerID="64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5" Dec 08 17:42:46 crc kubenswrapper[5118]: E1208 17:42:46.800751 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:46 crc kubenswrapper[5118]: E1208 17:42:46.806341 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e62a6cafa8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e62a6cafa8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:44.564605582 +0000 UTC m=+21.465929676,LastTimestamp:2025-12-08 17:42:46.800700176 +0000 UTC m=+23.702024280,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:47 crc kubenswrapper[5118]: I1208 17:42:47.344060 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:48 crc kubenswrapper[5118]: I1208 17:42:48.342422 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:49 crc kubenswrapper[5118]: E1208 17:42:49.061484 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:49 crc kubenswrapper[5118]: I1208 17:42:49.343270 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:50 crc kubenswrapper[5118]: I1208 17:42:50.345299 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:51 crc kubenswrapper[5118]: I1208 17:42:51.346784 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:51 crc kubenswrapper[5118]: E1208 17:42:51.895702 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:52 crc kubenswrapper[5118]: E1208 17:42:52.129597 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:52 crc kubenswrapper[5118]: I1208 17:42:52.344558 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:52 crc kubenswrapper[5118]: E1208 17:42:52.984560 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.211627 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 
17:42:53.213103 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.213178 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.213199 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.213237 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:53 crc kubenswrapper[5118]: E1208 17:42:53.236189 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.345183 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:53 crc kubenswrapper[5118]: E1208 17:42:53.468726 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.556798 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.557480 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.558678 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.558724 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.558744 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:53 crc kubenswrapper[5118]: E1208 17:42:53.559368 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:53 crc kubenswrapper[5118]: I1208 17:42:53.559873 5118 scope.go:117] "RemoveContainer" containerID="64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5" Dec 08 17:42:53 crc kubenswrapper[5118]: E1208 17:42:53.560285 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:53 crc kubenswrapper[5118]: E1208 17:42:53.566924 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e62a6cafa8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e62a6cafa8e openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:44.564605582 +0000 UTC m=+21.465929676,LastTimestamp:2025-12-08 17:42:53.560231466 +0000 UTC m=+30.461555590,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:54 crc kubenswrapper[5118]: I1208 17:42:54.344524 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:55 crc kubenswrapper[5118]: I1208 17:42:55.343409 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:56 crc kubenswrapper[5118]: I1208 17:42:56.344949 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:56 crc kubenswrapper[5118]: E1208 17:42:56.928594 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:42:57 crc kubenswrapper[5118]: I1208 17:42:57.342992 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:58 crc kubenswrapper[5118]: I1208 17:42:58.344733 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:59 crc kubenswrapper[5118]: I1208 17:42:59.343724 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:00 crc kubenswrapper[5118]: I1208 17:43:00.536366 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:00 crc kubenswrapper[5118]: I1208 17:43:00.539714 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:00 crc kubenswrapper[5118]: I1208 17:43:00.539819 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:00 crc kubenswrapper[5118]: I1208 
17:43:00.539848 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:00 crc kubenswrapper[5118]: I1208 17:43:00.539993 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:00 crc kubenswrapper[5118]: E1208 17:43:00.545649 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:43:00 crc kubenswrapper[5118]: I1208 17:43:00.545671 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:00 crc kubenswrapper[5118]: E1208 17:43:00.547510 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:43:01 crc kubenswrapper[5118]: I1208 17:43:01.346083 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:02 crc kubenswrapper[5118]: I1208 17:43:02.345152 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:03 crc kubenswrapper[5118]: I1208 17:43:03.345602 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:03 crc kubenswrapper[5118]: E1208 17:43:03.469050 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:04 crc kubenswrapper[5118]: I1208 17:43:04.345507 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:04 crc kubenswrapper[5118]: I1208 17:43:04.426068 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:04 crc kubenswrapper[5118]: I1208 17:43:04.427226 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:04 crc kubenswrapper[5118]: I1208 17:43:04.427306 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:04 crc kubenswrapper[5118]: I1208 17:43:04.427387 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:04 crc kubenswrapper[5118]: E1208 17:43:04.428155 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:04 crc kubenswrapper[5118]: I1208 
17:43:04.428628 5118 scope.go:117] "RemoveContainer" containerID="64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5" Dec 08 17:43:04 crc kubenswrapper[5118]: E1208 17:43:04.433061 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5e6830ca1a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e6830ca1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.334444058 +0000 UTC m=+3.235768162,LastTimestamp:2025-12-08 17:43:04.430818321 +0000 UTC m=+41.332142415,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:04 crc kubenswrapper[5118]: E1208 17:43:04.678062 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5e72d8b39f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e72d8b39f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.513220511 +0000 UTC m=+3.414544625,LastTimestamp:2025-12-08 17:43:04.671738583 +0000 UTC m=+41.573062717,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:04 crc kubenswrapper[5118]: E1208 17:43:04.690825 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5e7389c57c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e7389c57c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:26.524824956 +0000 UTC m=+3.426149060,LastTimestamp:2025-12-08 17:43:04.684752018 +0000 UTC m=+41.586076122,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:05 crc 
kubenswrapper[5118]: I1208 17:43:05.343243 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:05 crc kubenswrapper[5118]: I1208 17:43:05.633838 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:43:05 crc kubenswrapper[5118]: I1208 17:43:05.636327 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d"} Dec 08 17:43:05 crc kubenswrapper[5118]: I1208 17:43:05.636683 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:05 crc kubenswrapper[5118]: I1208 17:43:05.637638 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:05 crc kubenswrapper[5118]: I1208 17:43:05.637706 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:05 crc kubenswrapper[5118]: I1208 17:43:05.637732 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:05 crc kubenswrapper[5118]: E1208 17:43:05.638445 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.345194 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.641663 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.643496 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.646315 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d" exitCode=255 Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.646423 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d"} Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.646516 5118 scope.go:117] "RemoveContainer" containerID="64c8edc57f23085b50a88ed4e2bb4f5159ddfdc29c0de05dbaa5a457f5d826d5" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.646980 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.647939 5118 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.648143 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.649284 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:06 crc kubenswrapper[5118]: E1208 17:43:06.649994 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.650563 5118 scope.go:117] "RemoveContainer" containerID="a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d" Dec 08 17:43:06 crc kubenswrapper[5118]: E1208 17:43:06.651170 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:06 crc kubenswrapper[5118]: E1208 17:43:06.660399 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e62a6cafa8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e62a6cafa8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:44.564605582 +0000 UTC m=+21.465929676,LastTimestamp:2025-12-08 17:43:06.651074043 +0000 UTC m=+43.552398167,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:06 crc kubenswrapper[5118]: E1208 17:43:06.693088 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:43:06 crc kubenswrapper[5118]: I1208 17:43:06.797675 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.345052 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.548187 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.549658 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.549731 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.549751 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.549797 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:07 crc kubenswrapper[5118]: E1208 17:43:07.556937 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:43:07 crc kubenswrapper[5118]: E1208 17:43:07.563642 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.652176 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.655126 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.655942 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.656006 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.656033 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:07 crc kubenswrapper[5118]: E1208 17:43:07.656649 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:07 crc kubenswrapper[5118]: I1208 17:43:07.657101 5118 scope.go:117] "RemoveContainer" containerID="a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d" Dec 08 17:43:07 crc kubenswrapper[5118]: E1208 17:43:07.657631 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:07 crc kubenswrapper[5118]: E1208 17:43:07.665982 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e62a6cafa8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e62a6cafa8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:44.564605582 +0000 UTC m=+21.465929676,LastTimestamp:2025-12-08 17:43:07.657371412 +0000 UTC m=+44.558695546,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:08 crc kubenswrapper[5118]: I1208 17:43:08.345036 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:09 crc kubenswrapper[5118]: I1208 17:43:09.342542 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:10 crc kubenswrapper[5118]: E1208 17:43:10.170499 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:43:10 crc kubenswrapper[5118]: I1208 17:43:10.344442 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:10 crc kubenswrapper[5118]: E1208 17:43:10.714596 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:43:11 crc kubenswrapper[5118]: E1208 17:43:11.104800 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:43:11 crc kubenswrapper[5118]: I1208 17:43:11.342312 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:12 crc kubenswrapper[5118]: I1208 17:43:12.346067 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:13 crc kubenswrapper[5118]: I1208 17:43:13.343578 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:13 crc kubenswrapper[5118]: E1208 17:43:13.469831 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:14 crc kubenswrapper[5118]: I1208 17:43:14.345815 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:14 crc kubenswrapper[5118]: E1208 17:43:14.563615 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:43:14 crc kubenswrapper[5118]: I1208 17:43:14.563709 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:14 crc kubenswrapper[5118]: I1208 17:43:14.564994 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:14 crc kubenswrapper[5118]: I1208 17:43:14.565057 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:14 crc kubenswrapper[5118]: I1208 17:43:14.565078 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:14 crc kubenswrapper[5118]: I1208 17:43:14.565123 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:14 crc kubenswrapper[5118]: E1208 17:43:14.580051 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:43:15 crc kubenswrapper[5118]: I1208 17:43:15.342608 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:15 crc kubenswrapper[5118]: I1208 17:43:15.638008 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:15 crc kubenswrapper[5118]: I1208 17:43:15.638295 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:15 crc kubenswrapper[5118]: I1208 17:43:15.639317 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:15 crc kubenswrapper[5118]: I1208 17:43:15.639384 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:15 crc kubenswrapper[5118]: I1208 17:43:15.639405 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:15 crc kubenswrapper[5118]: E1208 17:43:15.639997 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:15 crc kubenswrapper[5118]: I1208 17:43:15.640544 5118 scope.go:117] "RemoveContainer" 
containerID="a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d" Dec 08 17:43:15 crc kubenswrapper[5118]: E1208 17:43:15.640935 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:15 crc kubenswrapper[5118]: E1208 17:43:15.649796 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e62a6cafa8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e62a6cafa8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:44.564605582 +0000 UTC m=+21.465929676,LastTimestamp:2025-12-08 17:43:15.640834004 +0000 UTC m=+52.542158138,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:16 crc kubenswrapper[5118]: I1208 17:43:16.344414 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:16 crc kubenswrapper[5118]: I1208 17:43:16.933476 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:43:16 crc kubenswrapper[5118]: I1208 17:43:16.933754 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:16 crc kubenswrapper[5118]: I1208 17:43:16.934646 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:16 crc kubenswrapper[5118]: I1208 17:43:16.934713 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:16 crc kubenswrapper[5118]: I1208 17:43:16.934727 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:16 crc kubenswrapper[5118]: E1208 17:43:16.935092 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:17 crc kubenswrapper[5118]: I1208 17:43:17.342262 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:18 crc kubenswrapper[5118]: I1208 17:43:18.341244 5118 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:19 crc kubenswrapper[5118]: I1208 17:43:19.346995 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:20 crc kubenswrapper[5118]: I1208 17:43:20.344976 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:21 crc kubenswrapper[5118]: I1208 17:43:21.341342 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:21 crc kubenswrapper[5118]: E1208 17:43:21.572851 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:43:21 crc kubenswrapper[5118]: I1208 17:43:21.580959 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:21 crc kubenswrapper[5118]: I1208 17:43:21.582138 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:21 crc kubenswrapper[5118]: I1208 17:43:21.582222 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:21 crc kubenswrapper[5118]: I1208 17:43:21.582247 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:21 crc kubenswrapper[5118]: I1208 17:43:21.582293 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:21 crc kubenswrapper[5118]: E1208 17:43:21.600511 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:43:22 crc kubenswrapper[5118]: I1208 17:43:22.343731 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:23 crc kubenswrapper[5118]: I1208 17:43:23.345157 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:23 crc kubenswrapper[5118]: E1208 17:43:23.470205 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:24 crc kubenswrapper[5118]: I1208 17:43:24.342269 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:25 crc kubenswrapper[5118]: I1208 17:43:25.343285 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:26 crc kubenswrapper[5118]: I1208 17:43:26.344654 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:27 crc kubenswrapper[5118]: I1208 17:43:27.344509 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.343175 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.512426 5118 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dcwdm" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.522268 5118 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dcwdm" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.600923 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.602160 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.602196 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.602207 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.602226 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.609403 5118 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.609662 5118 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.609686 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.612354 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.612380 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.612391 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:28 crc 
kubenswrapper[5118]: I1208 17:43:28.612409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.612421 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.618655 5118 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.624904 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.630951 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.631009 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 
17:43:28.631025 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.631048 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.631061 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.640967 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.647719 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.647764 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 
17:43:28.647774 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.647791 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.647801 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.658619 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.665158 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.665194 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 
17:43:28.665208 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.665226 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:28 crc kubenswrapper[5118]: I1208 17:43:28.665239 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.673389 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.673553 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.673583 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 
17:43:28.774063 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.875103 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:28 crc kubenswrapper[5118]: E1208 17:43:28.976236 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.077103 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.178956 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: I1208 17:43:29.255829 5118 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.280095 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.380252 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.480912 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: I1208 17:43:29.523515 5118 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 17:38:28 +0000 UTC" deadline="2026-01-03 05:24:29.455471269 +0000 UTC" Dec 08 17:43:29 crc kubenswrapper[5118]: I1208 17:43:29.523576 5118 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="611h40m59.931900407s" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.581546 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.681859 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.782208 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.882964 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5118]: E1208 17:43:29.984136 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.084943 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.185953 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.286299 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.386797 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc 
kubenswrapper[5118]: I1208 17:43:30.426325 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.427356 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.427395 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.427407 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.427828 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.428102 5118 scope.go:117] "RemoveContainer" containerID="a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.486948 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.587937 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.688901 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.723333 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.725612 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077"} Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.725972 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.728091 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.728154 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:30 crc kubenswrapper[5118]: I1208 17:43:30.728165 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.728626 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.789200 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.890059 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5118]: E1208 17:43:30.990442 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: 
E1208 17:43:31.090584 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.191400 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.292446 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.392756 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.493720 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.594717 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.695556 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.795653 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.896227 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5118]: E1208 17:43:31.997193 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.098316 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.199016 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.300098 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.400318 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.501284 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.601782 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.702810 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.731154 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.731801 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.733236 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" exitCode=255 Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.733298 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077"} Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.733341 5118 scope.go:117] "RemoveContainer" containerID="a916deeb8c941c77bf09446a38a73372d11667759e035a3184d83062a800c06d" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.733482 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.734276 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.734322 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.734337 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.734829 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:32 crc kubenswrapper[5118]: I1208 17:43:32.735136 5118 scope.go:117] "RemoveContainer" containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.735378 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.803912 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:32 crc kubenswrapper[5118]: E1208 17:43:32.904157 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.005004 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.105746 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.206803 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.307324 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.407939 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.471013 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 
17:43:33.508060 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.608420 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.709388 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: I1208 17:43:33.736777 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.809662 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:33 crc kubenswrapper[5118]: E1208 17:43:33.910755 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.011577 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.112742 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.213344 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.314243 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.415286 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.516210 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.616478 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.716631 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.817703 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:34 crc kubenswrapper[5118]: E1208 17:43:34.918204 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.019239 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.120070 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.220340 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: I1208 17:43:35.316916 5118 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.320682 5118 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.420807 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.521261 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.621955 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.742267 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.843411 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:35 crc kubenswrapper[5118]: E1208 17:43:35.943646 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.043943 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.144340 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.245202 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.345706 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.446833 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.547091 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.648186 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.748675 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: I1208 17:43:36.797967 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:36 crc kubenswrapper[5118]: I1208 17:43:36.798389 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:36 crc kubenswrapper[5118]: I1208 17:43:36.799764 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5118]: I1208 17:43:36.799835 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5118]: I1208 17:43:36.799862 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.800562 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:36 crc kubenswrapper[5118]: 
I1208 17:43:36.801017 5118 scope.go:117] "RemoveContainer" containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.801363 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.849596 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:36 crc kubenswrapper[5118]: E1208 17:43:36.950420 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.051285 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.151714 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.251836 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.352971 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.453335 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.553663 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.653834 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.754810 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.855437 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:37 crc kubenswrapper[5118]: E1208 17:43:37.956399 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.056512 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.157739 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.257966 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.358399 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.458795 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 
17:43:38.560154 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.661636 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.762953 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.863952 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.964495 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.985945 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 17:43:38 crc kubenswrapper[5118]: I1208 17:43:38.989854 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5118]: I1208 17:43:38.990007 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5118]: I1208 17:43:38.990080 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5118]: I1208 17:43:38.990152 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5118]: I1208 17:43:38.990212 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5118]: E1208 17:43:38.999856 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.004255 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.004390 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.004508 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.004613 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.004720 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:39Z","lastTransitionTime":"2025-12-08T17:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.015263 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.019190 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.019399 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.019600 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.019707 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.019790 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:39Z","lastTransitionTime":"2025-12-08T17:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.028377 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.032068 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.032141 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.032161 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.032189 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.032209 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:39Z","lastTransitionTime":"2025-12-08T17:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.046927 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.047619 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.065058 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.165164 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.266270 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.367225 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.467518 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.568199 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.668513 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.768931 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.869179 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:39 crc kubenswrapper[5118]: I1208 17:43:39.913332 5118 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:43:39 crc kubenswrapper[5118]: E1208 17:43:39.970639 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.071736 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.172827 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.273233 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.373917 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.475301 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.576360 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.677002 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: I1208 17:43:40.726803 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:40 crc kubenswrapper[5118]: I1208 17:43:40.727060 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:40 crc kubenswrapper[5118]: I1208 17:43:40.727828 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:40 crc kubenswrapper[5118]: I1208 17:43:40.727852 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:40 crc 
kubenswrapper[5118]: I1208 17:43:40.727861 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.728237 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:40 crc kubenswrapper[5118]: I1208 17:43:40.728465 5118 scope.go:117] "RemoveContainer" containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.728641 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.778112 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.878970 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:40 crc kubenswrapper[5118]: E1208 17:43:40.979992 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.080144 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.180774 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.282066 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.382775 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: I1208 17:43:41.426778 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:41 crc kubenswrapper[5118]: I1208 17:43:41.427673 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:41 crc kubenswrapper[5118]: I1208 17:43:41.427704 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:41 crc kubenswrapper[5118]: I1208 17:43:41.427714 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.428007 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.483844 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.585761 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.687179 5118 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.787997 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.888944 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:41 crc kubenswrapper[5118]: E1208 17:43:41.989480 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.089670 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.190139 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.290523 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.391050 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.491774 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.592381 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.693626 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.795078 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.896184 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:42 crc kubenswrapper[5118]: E1208 17:43:42.996561 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.096690 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.197446 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.298431 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.399553 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.472326 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.500309 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.600765 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.701853 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.803108 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:43 crc kubenswrapper[5118]: E1208 17:43:43.903955 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.005230 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.105731 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.206535 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.307610 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.408399 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.509143 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.610894 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.711950 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: I1208 17:43:44.794182 5118 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.812703 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:44 crc kubenswrapper[5118]: E1208 17:43:44.913428 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.014507 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.115670 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.216606 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.318052 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.418580 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.519530 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.621214 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc 
kubenswrapper[5118]: E1208 17:43:45.721736 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.822088 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:45 crc kubenswrapper[5118]: E1208 17:43:45.922829 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.023839 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.125026 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.226131 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.326656 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.427636 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.528660 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.629366 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.730366 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.831211 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:46 crc kubenswrapper[5118]: E1208 17:43:46.932019 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.033164 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.134034 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.235114 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.335944 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.436283 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.536485 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.637330 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.738562 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 
08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.838721 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:47 crc kubenswrapper[5118]: E1208 17:43:47.939479 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.040589 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.140787 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.241795 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.342244 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.443116 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.543281 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.644266 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.745581 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.846052 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:48 crc kubenswrapper[5118]: E1208 17:43:48.947419 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.048061 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.148940 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.249952 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.350912 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.425077 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.427100 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.428357 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.428599 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.428759 5118 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.430700 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.430948 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.431100 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.431251 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.431386 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:49Z","lastTransitionTime":"2025-12-08T17:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.431676 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.448479 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.452722 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.452813 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.452843 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.452919 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.452954 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:49Z","lastTransitionTime":"2025-12-08T17:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.468947 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.473457 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.473500 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.473517 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.473534 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.473546 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:49Z","lastTransitionTime":"2025-12-08T17:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.486293 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.490252 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.490291 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.490300 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.490312 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:49 crc kubenswrapper[5118]: I1208 17:43:49.490322 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:49Z","lastTransitionTime":"2025-12-08T17:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.504055 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.504305 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.504348 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.605503 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.706403 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.806865 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:49 crc kubenswrapper[5118]: E1208 17:43:49.907169 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.007724 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.108444 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.208716 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.309235 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.410097 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.510640 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.610796 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.711698 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.813169 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:50 crc kubenswrapper[5118]: E1208 17:43:50.913985 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.014341 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.114864 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.215762 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.316230 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.417013 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: I1208 17:43:51.426450 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:51 crc kubenswrapper[5118]: I1208 17:43:51.427410 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:51 crc kubenswrapper[5118]: I1208 
17:43:51.427458 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:51 crc kubenswrapper[5118]: I1208 17:43:51.427472 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.427981 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:51 crc kubenswrapper[5118]: I1208 17:43:51.428241 5118 scope.go:117] "RemoveContainer" containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.428470 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.517848 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.617990 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.718341 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.818792 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:51 crc kubenswrapper[5118]: E1208 17:43:51.919268 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.019458 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.119580 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.220187 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.320767 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.421629 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.522309 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.623478 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.724006 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc kubenswrapper[5118]: E1208 17:43:52.824681 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:52 crc 
kubenswrapper[5118]: E1208 17:43:52.925301 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.026394 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.127581 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.228210 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.329405 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.430682 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.473692 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.531912 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.632670 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.734682 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.835327 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:53 crc kubenswrapper[5118]: E1208 17:43:53.935993 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.037181 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.137451 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.238583 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.338985 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.439307 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.540244 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.640865 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.741387 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.842537 5118 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 08 17:43:54 crc kubenswrapper[5118]: E1208 17:43:54.942853 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.044074 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.145155 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.245382 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.346404 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.446975 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.547167 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.647774 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.748634 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.849257 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:55 crc kubenswrapper[5118]: E1208 17:43:55.950113 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.050427 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.151262 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.252454 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.353470 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.454107 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.554701 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.655838 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.756011 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.856397 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:56 crc kubenswrapper[5118]: E1208 17:43:56.956670 5118 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.057833 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.158763 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.259208 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.359538 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: I1208 17:43:57.426401 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:57 crc kubenswrapper[5118]: I1208 17:43:57.428258 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:57 crc kubenswrapper[5118]: I1208 17:43:57.428341 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:57 crc kubenswrapper[5118]: I1208 17:43:57.428360 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.429052 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.460732 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.561430 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.662207 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.762572 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.863403 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:57 crc kubenswrapper[5118]: E1208 17:43:57.964672 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.065865 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.166869 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.267766 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.368791 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.469102 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.570228 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.670982 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.771129 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.872013 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:58 crc kubenswrapper[5118]: E1208 17:43:58.972134 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.073013 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.173509 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.274371 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.375488 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.476725 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.577477 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.670196 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.673868 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.673938 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.673955 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.673973 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.673985 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:59Z","lastTransitionTime":"2025-12-08T17:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.687077 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.691202 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.691253 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.691265 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.691285 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.691298 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:59Z","lastTransitionTime":"2025-12-08T17:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.706354 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.710766 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.710812 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.710823 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.710841 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.710852 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:59Z","lastTransitionTime":"2025-12-08T17:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.725107 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.728724 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.728766 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.728777 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.728792 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:59 crc kubenswrapper[5118]: I1208 17:43:59.728803 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:59Z","lastTransitionTime":"2025-12-08T17:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.739938 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3b244703-86d1-4a74-bdbb-1446f2890ff6\\\",\\\"systemUUID\\\":\\\"32c1a977-c4dc-4b4f-b307-ff2a2f4e57f1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.740191 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.740235 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.840973 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:59 crc kubenswrapper[5118]: E1208 17:43:59.941851 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.042195 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.143263 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.243939 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.345091 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.446241 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.547231 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.648241 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.749157 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.849977 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:00 crc kubenswrapper[5118]: E1208 17:44:00.950758 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.051625 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.152619 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.253347 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.338294 5118 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.355556 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.356139 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.356209 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.356226 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.356253 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:01 crc kubenswrapper[5118]: 
I1208 17:44:01.356269 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:01Z","lastTransitionTime":"2025-12-08T17:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.365143 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.458574 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.458663 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.458694 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.458726 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.458747 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:01Z","lastTransitionTime":"2025-12-08T17:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.467832 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.561640 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.561682 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.561694 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.561739 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.561752 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:01Z","lastTransitionTime":"2025-12-08T17:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.566585 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.574639 5118 apiserver.go:52] "Watching apiserver" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.581742 5118 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.582331 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp","openshift-ovn-kubernetes/ovnkube-node-wr4x4","openshift-dns/node-resolver-vk6p6","openshift-image-registry/node-ca-pvtml","openshift-multus/network-metrics-daemon-54w78","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/iptables-alerter-5jnd7","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-8vxnt","openshift-multus/multus-additional-cni-plugins-lq9nf","openshift-multus/multus-dlvbf","openshift-network-diagnostics/network-check-target-fhkjl","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt"] Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.583442 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.584124 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.584204 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.585735 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.585993 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.587219 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.587540 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.587616 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.588658 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.588658 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.589124 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.589199 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.592381 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.592467 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.592469 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.592840 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.594007 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.596212 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.603317 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.605855 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.607855 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.607898 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.607998 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.608235 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.608613 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.610038 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.613591 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.616030 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.616050 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.616377 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.616117 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.616230 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.616990 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.619399 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.619632 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.620123 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.620129 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.620650 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.620187 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.622408 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.623268 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.624141 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.624325 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.624458 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.626897 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.627687 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.628022 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.628174 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.629474 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630100 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630205 5118 scope.go:117] "RemoveContainer" containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630325 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.630395 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630450 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630467 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630481 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630445 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.630694 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.631813 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.632552 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.639577 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.652095 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.657175 5118 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.661641 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.664004 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.664045 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.664057 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.664077 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.664089 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:01Z","lastTransitionTime":"2025-12-08T17:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.666555 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.666828 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.671399 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.678669 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.688951 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pvtml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kg6bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pvtml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.696574 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b662f523-82ac-4c94-9943-bcf7d574ae9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4eadc29321fff86ab58be2c14459298a72ab5e872e7059d7e3d0bc5492c9504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd17024ed34c33fec3e296a713e5e14bef7dcfa92e492b6f8bddb105a5f0d9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd17024ed34c33fec3e296a713e5e14bef7dcfa92e492b6f8bddb105a5f0d9d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.706547 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f7b68f5-a899-42d5-9245-79c431d8129c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3440b4452b60509e3e040cf572e5e60894bc9a6567e9de291b7da6fdf682fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe54a700b3e33ab05122ea3de8d4c794b951deec933a88704f0bcb0ffa22893f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\
"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://677558b2bc6e63519df956a2a4d0a56e3e5bd5be9da243e1cf7f8c93a152c362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://33c98a2d2f7f5ee864d29cb06b16d3fd3fbc99f98965d7c3a2c0f34bae6545c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33c98a2d2f7f5ee864d29cb06b16d3fd3fbc99f98965d7c3a2c0f34bae6545c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.715720 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.724715 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.732948 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.739981 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740018 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740037 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740079 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740095 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740114 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: 
\"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740129 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740143 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740140 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnmrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnmrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740426 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740490 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740521 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740550 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740575 5118 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740609 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740633 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740656 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740678 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740700 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740725 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740749 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740771 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740765 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740798 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740825 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740848 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740892 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740913 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740934 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.741110 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.741265 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.741826 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.741889 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.741899 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.741909 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.741933 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742002 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742046 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742486 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742559 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742597 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742684 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742786 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742925 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.742989 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.740959 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743347 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743378 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743397 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743415 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743430 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743194 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743234 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743493 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743507 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743522 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743571 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743604 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743629 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743652 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743679 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743705 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743867 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743917 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743945 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743968 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743995 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.743945 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744023 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744434 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744457 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744497 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744517 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744533 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744554 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744571 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744587 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744603 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744622 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744639 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744643 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744656 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744654 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744705 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744745 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744776 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744782 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.744850 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745050 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745079 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745196 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745271 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745321 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745380 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745383 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745423 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745489 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745533 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745488 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745457 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745570 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745572 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745609 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745646 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745677 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745679 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745711 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745818 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745839 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745922 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745806 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). 
InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.745792 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746000 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746283 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746321 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746334 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746371 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747042 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747043 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746444 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746732 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.746741 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747380 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747418 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747379 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747469 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747524 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747557 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747582 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747607 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747637 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747659 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747681 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747706 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747730 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod 
\"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747752 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747774 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747797 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747825 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747848 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747870 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747913 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747935 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747960 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747984 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: 
\"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748007 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748035 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748058 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748079 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748104 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748127 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748151 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748180 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748201 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748224 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: 
\"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748246 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748289 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748314 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748343 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748373 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748398 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748424 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748451 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748476 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748499 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748523 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.747953 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748053 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748133 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748357 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748328 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748388 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748319 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.748564 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.749261 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.749492 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.749598 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.749733 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.749776 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.749822 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.749862 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750043 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750069 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750120 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750150 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750173 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750290 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750317 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750336 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: 
"92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750384 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750522 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750625 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750772 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750636 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750951 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.750980 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751012 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751098 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751111 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751162 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751205 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751165 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751257 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751324 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751356 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751366 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751380 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751398 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751406 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751432 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751432 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751711 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751720 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751744 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751778 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751788 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751804 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751831 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751956 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752322 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752336 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4a524a-a1cb-4e10-8765-aa38225d2de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckfdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckfdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckfdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ckfdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckfdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckfdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckfdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lq9nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.751860 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752460 5118 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752494 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.752537 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.252514008 +0000 UTC m=+99.153838102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752560 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752589 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752615 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752643 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752663 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752679 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod 
\"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752697 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752693 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752715 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752725 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752734 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752754 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752772 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752789 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752787 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752806 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752833 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752857 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752923 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752953 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752982 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753007 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753030 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753052 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753075 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753100 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753127 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753252 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753289 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753313 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753341 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753368 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753394 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753419 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753570 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: 
\"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753603 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753633 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753660 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753686 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753712 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753738 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753822 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753847 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753872 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753929 5118 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753953 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753976 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753998 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754020 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754042 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754067 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754092 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754116 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754139 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754163 5118 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754192 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754218 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754245 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754271 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754298 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754325 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754351 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754378 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754403 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " 
Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754434 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754461 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754485 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754509 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754530 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754547 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754570 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754588 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754607 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754626 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 
17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.755861 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.755911 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.755961 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.755981 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756004 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756024 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756043 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756061 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756080 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756102 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756122 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756140 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756168 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756230 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-bin\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756256 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756273 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756295 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756323 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-kubelet\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756358 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-script-lib\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756378 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-host\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756395 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6bq\" (UniqueName: \"kubernetes.io/projected/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-kube-api-access-kg6bq\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756425 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-proxy-tls\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756445 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756463 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756485 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-cni-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756501 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-socket-dir-parent\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756517 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-cni-bin\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756533 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdncr\" (UniqueName: 
\"kubernetes.io/projected/a091751f-234c-43ee-8324-ebb98bb3ec36-kube-api-access-tdncr\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756552 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756570 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756587 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-env-overrides\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756605 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-mcd-auth-proxy-config\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756634 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c49b3d-dce8-4a73-895a-32a521a06b22-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756651 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cni-binary-copy\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756669 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-slash\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756688 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: 
\"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756711 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-rootfs\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756742 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-system-cni-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756785 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-netns\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756801 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-config\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756823 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756849 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756920 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756949 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-os-release\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756967 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-netns\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.756983 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-ovn\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757000 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-node-log\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757020 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757036 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b10e1655-f317-439b-8188-cbfbebc4d756-tmp-dir\") pod \"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757069 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757118 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848dl\" (UniqueName: \"kubernetes.io/projected/e666ddb1-3625-4468-9d05-21215b5041c1-kube-api-access-848dl\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757135 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-system-cni-dir\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: 
I1208 17:44:01.757156 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-k8s-cni-cncf-io\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757174 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-kubelet\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757190 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-systemd-units\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757209 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a091751f-234c-43ee-8324-ebb98bb3ec36-cni-binary-copy\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757226 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-cni-multus\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757254 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-systemd\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757270 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-var-lib-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757286 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovn-node-metrics-cert\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757303 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-serviceca\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 
08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757340 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757365 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7svb\" (UniqueName: \"kubernetes.io/projected/94c49b3d-dce8-4a73-895a-32a521a06b22-kube-api-access-s7svb\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757387 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfdv\" (UniqueName: \"kubernetes.io/projected/ca4a524a-a1cb-4e10-8765-aa38225d2de3-kube-api-access-ckfdv\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757406 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-multus-certs\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757425 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-etc-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757476 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757497 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cnibin\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757514 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-hostroot\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757530 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757550 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ftnr\" (UniqueName: \"kubernetes.io/projected/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-kube-api-access-2ftnr\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757569 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.757587 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758458 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758514 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-os-release\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758553 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-daemon-config\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758583 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj2k8\" (UniqueName: \"kubernetes.io/projected/b10e1655-f317-439b-8188-cbfbebc4d756-kube-api-access-fj2k8\") pod \"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758608 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-netd\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758642 
5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmrq\" (UniqueName: \"kubernetes.io/projected/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-kube-api-access-gnmrq\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758676 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758706 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758735 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758762 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-conf-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758786 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-etc-kubernetes\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758814 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758838 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-cnibin\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758863 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b10e1655-f317-439b-8188-cbfbebc4d756-hosts-file\") pod 
\"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.758933 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-log-socket\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759222 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759276 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759386 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759411 5118 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759433 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759467 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759484 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759502 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759518 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759537 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759564 5118 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759588 5118 reconciler_common.go:299] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759606 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759624 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759641 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759658 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759678 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759694 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759711 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759737 5118 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759756 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759777 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759794 5118 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759810 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759825 5118 reconciler_common.go:299] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759841 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759856 5118 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759872 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759910 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759926 5118 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759965 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.759988 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760005 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760022 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760037 5118 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760052 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760069 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760096 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: 
\"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760115 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760160 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760200 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760252 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760295 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760322 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760351 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760384 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760414 5118 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760443 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.760481 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.752981 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753169 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753318 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753927 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.753938 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754090 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754660 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.754771 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.755265 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). 
InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.755414 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.755425 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.761380 5118 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.761414 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.761470 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.761497 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.761519 5118 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.761588 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.762913 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.763037 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.263017475 +0000 UTC m=+99.164341569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763199 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763387 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763407 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763427 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763450 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763617 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-dlvbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a091751f-234c-43ee-8324-ebb98bb3ec36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdncr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dlvbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763673 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.763677 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764040 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764100 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764114 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764260 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764338 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764537 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764589 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764656 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764710 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.765025 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.764676 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766704 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766722 5118 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766736 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766753 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766771 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766787 5118 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766800 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766814 5118 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766826 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766866 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766906 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766940 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766973 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.766986 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767000 5118 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767013 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767027 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767786 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767812 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767827 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767961 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767973 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.767982 5118 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.768066 5118 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.768080 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: 
\"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.768093 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.768103 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.768113 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.768123 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.768133 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.765421 5118 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.770829 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.777394 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.777688 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.277657084 +0000 UTC m=+99.178981178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.777704 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778079 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778348 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778395 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"264185e3-872b-4c02-a81a-b4ed66da2e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:43:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:43:31.230006 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:43:31.230148 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:43:31.234143 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4028338799/tls.crt::/tmp/serving-cert-4028338799/tls.key\\\\\\\"\\\\nI1208 17:43:31.886595 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:43:31.890089 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:43:31.890118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:43:31.890164 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:43:31.890177 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:43:31.896693 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 17:43:31.896718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:43:31.896724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:43:31.896730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:43:31.896732 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:43:31.896735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:43:31.896738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 17:43:31.896773 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 17:43:31.898772 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:43:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778539 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
(UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778628 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778679 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778096 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778816 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778843 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778855 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778894 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.778914 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:01Z","lastTransitionTime":"2025-12-08T17:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.779328 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.779423 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.779960 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.779975 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.779986 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.780077 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.280059829 +0000 UTC m=+99.181383923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.781450 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.781524 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.781610 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.781670 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.781749 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.782672 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.782800 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.783156 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.783671 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.785275 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.785773 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.785800 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.785816 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.785944 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.285923479 +0000 UTC m=+99.187247583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.786036 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.787703 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.787782 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.795271 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.795296 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.795490 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.795663 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.796047 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.796117 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.796335 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.797533 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.797663 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.797706 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.797777 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.797892 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.797911 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.797963 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.798032 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.798056 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.798484 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.798816 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.798949 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.799319 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.799491 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.799545 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.799590 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.799846 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). 
InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.799964 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.800026 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.800369 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.801067 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.801239 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.801288 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.803893 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.804120 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.804770 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.804824 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.805299 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.805820 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.806086 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.806627 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.807228 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.807232 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.807572 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.807664 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.807705 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.807781 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.807961 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.808015 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.808015 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.808101 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.808384 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.808608 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.809050 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.809085 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.809497 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.809824 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.809860 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.809943 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.809989 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.811803 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.811832 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812035 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812045 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812051 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812299 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812333 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812518 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812682 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812725 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812891 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.812993 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pvtml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kg6bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pvtml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.813027 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.813205 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.813278 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.813926 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814025 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814083 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814268 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814482 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814501 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814556 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814588 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814683 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814768 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.814978 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.815080 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.815096 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.815239 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.815329 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.815511 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.815660 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.816249 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). 
InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.816559 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.816664 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.820152 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.822917 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b215d7d-6627-489e-89b8-65a143e092da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://34a764685c0c2e874d1089a49379fd329c1f930a48eb0ddf1ef2647c6603393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cfca8d494212b21c8a44513c5dd06e44549b08479d7bf1138bd5fb15936ccee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d530b6dea7712c8b4797040bc123b6178ce49d36eaf7d649d8ed2d19ac499dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4eb252f824da37bd546e141c2cc9badc7adb7d51a3ea2f9270311119403c238\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.828398 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.830602 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-vk6p6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e1655-f317-439b-8188-cbfbebc4d756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj2k8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vk6p6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.832193 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.843768 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.845315 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b06
85b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ftnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr4x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.850341 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.854006 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-54w78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e666ddb1-3625-4468-9d05-21215b5041c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-848dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-848dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-54w78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.862490 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c49b3d-dce8-4a73-895a-32a521a06b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7svb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7svb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-x68jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869136 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869173 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-kubelet\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869218 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869242 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-kubelet\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869289 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-script-lib\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869351 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-host\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869373 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kg6bq\" (UniqueName: \"kubernetes.io/projected/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-kube-api-access-kg6bq\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869400 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-proxy-tls\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869422 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869446 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-cni-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869467 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-socket-dir-parent\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869488 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-cni-bin\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869508 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tdncr\" (UniqueName: \"kubernetes.io/projected/a091751f-234c-43ee-8324-ebb98bb3ec36-kube-api-access-tdncr\") pod 
\"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869532 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869547 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-host\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869554 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-env-overrides\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869578 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-mcd-auth-proxy-config\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869602 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c49b3d-dce8-4a73-895a-32a521a06b22-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869624 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cni-binary-copy\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869644 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-slash\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869665 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869688 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-rootfs\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869718 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-system-cni-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869739 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-netns\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869763 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-config\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869797 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-os-release\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869818 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-netns\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869838 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-ovn\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869858 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-node-log\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869898 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869923 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b10e1655-f317-439b-8188-cbfbebc4d756-tmp-dir\") pod \"node-resolver-vk6p6\" (UID: 
\"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869949 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.869971 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-848dl\" (UniqueName: \"kubernetes.io/projected/e666ddb1-3625-4468-9d05-21215b5041c1-kube-api-access-848dl\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870003 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-system-cni-dir\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870017 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-slash\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870045 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-system-cni-dir\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870072 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-netns\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870083 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870101 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-k8s-cni-cncf-io\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870135 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-rootfs\") pod \"machine-config-daemon-8vxnt\" (UID: 
\"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870267 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-system-cni-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870290 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-netns\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870287 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-script-lib\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870285 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870397 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-cni-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870448 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-socket-dir-parent\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870483 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-cni-bin\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870517 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-os-release\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870077 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-k8s-cni-cncf-io\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870573 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-kubelet\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870596 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-systemd-units\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870615 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a091751f-234c-43ee-8324-ebb98bb3ec36-cni-binary-copy\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870676 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-node-log\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870682 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870714 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-cni-multus\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870750 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-systemd\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870774 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-var-lib-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870796 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovn-node-metrics-cert\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870815 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-cni-multus\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870819 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-serviceca\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870866 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870917 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s7svb\" (UniqueName: \"kubernetes.io/projected/94c49b3d-dce8-4a73-895a-32a521a06b22-kube-api-access-s7svb\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870942 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfdv\" (UniqueName: \"kubernetes.io/projected/ca4a524a-a1cb-4e10-8765-aa38225d2de3-kube-api-access-ckfdv\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870965 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-multus-certs\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.870987 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-etc-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871009 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871020 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-config\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871034 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cnibin\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871084 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-systemd\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871091 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-hostroot\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871116 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871158 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-mcd-auth-proxy-config\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871171 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ftnr\" (UniqueName: \"kubernetes.io/projected/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-kube-api-access-2ftnr\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871201 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-os-release\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871220 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-etc-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871162 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b10e1655-f317-439b-8188-cbfbebc4d756-tmp-dir\") pod \"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871253 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-daemon-config\") pod 
\"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871281 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fj2k8\" (UniqueName: \"kubernetes.io/projected/b10e1655-f317-439b-8188-cbfbebc4d756-kube-api-access-fj2k8\") pod \"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871286 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-var-lib-kubelet\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871303 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a091751f-234c-43ee-8324-ebb98bb3ec36-cni-binary-copy\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871062 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-var-lib-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871331 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-netd\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871344 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-systemd-units\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871352 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-hostroot\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871367 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-openvswitch\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871360 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gnmrq\" (UniqueName: \"kubernetes.io/projected/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-kube-api-access-gnmrq\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 
17:44:01.871404 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871427 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871460 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-conf-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871481 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cnibin\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: E1208 17:44:01.871483 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs podName:e666ddb1-3625-4468-9d05-21215b5041c1 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.371468873 +0000 UTC m=+99.272792967 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs") pod "network-metrics-daemon-54w78" (UID: "e666ddb1-3625-4468-9d05-21215b5041c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871525 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-ovn\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871490 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-conf-dir\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871560 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-etc-kubernetes\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871578 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: 
I1208 17:44:01.871594 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-cnibin\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871609 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b10e1655-f317-439b-8188-cbfbebc4d756-hosts-file\") pod \"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871623 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-log-socket\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871637 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-bin\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871689 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871785 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-host-run-multus-certs\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.871910 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-serviceca\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872102 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872238 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a091751f-234c-43ee-8324-ebb98bb3ec36-multus-daemon-config\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872347 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872589 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872752 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872768 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872778 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872789 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872798 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872826 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872835 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872843 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872852 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872861 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872870 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on 
node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872903 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872912 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872921 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872930 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872939 5118 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872948 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872977 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872987 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.872995 5118 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873005 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873013 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873022 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873030 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873059 5118 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873068 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873077 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873086 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873094 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873103 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873141 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-env-overrides\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873111 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873183 5118 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873213 5118 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873223 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873232 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873240 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc 
kubenswrapper[5118]: I1208 17:44:01.873250 5118 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873258 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873267 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873317 5118 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873327 5118 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873335 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873345 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873357 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873366 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873396 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873405 5118 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873415 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873424 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc 
kubenswrapper[5118]: I1208 17:44:01.873433 5118 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873441 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873454 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ca4a524a-a1cb-4e10-8765-aa38225d2de3-cni-binary-copy\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873470 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873517 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873536 5118 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873556 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873575 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873594 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873614 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873630 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873648 5118 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873666 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873684 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873702 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873718 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873735 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873755 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873772 5118 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873789 5118 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873806 5118 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873822 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873840 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873857 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873900 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873918 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873939 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873958 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873975 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.873993 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874010 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874028 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874047 5118 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874066 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874085 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874105 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874050 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-cnibin\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874122 5118 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc 
kubenswrapper[5118]: I1208 17:44:01.874158 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874171 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874183 5118 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874196 5118 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874207 5118 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874209 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ca4a524a-a1cb-4e10-8765-aa38225d2de3-os-release\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874219 5118 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874257 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874329 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874386 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874406 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874594 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874664 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: 
\"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874687 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874707 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874724 5118 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874741 5118 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874764 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874784 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874805 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874823 5118 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874843 5118 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874862 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874963 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.878737 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b10e1655-f317-439b-8188-cbfbebc4d756-hosts-file\") pod \"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.878789 5118 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-log-socket\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.879697 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a091751f-234c-43ee-8324-ebb98bb3ec36-etc-kubernetes\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.879835 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-bin\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.880264 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c49b3d-dce8-4a73-895a-32a521a06b22-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.874249 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-netd\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.881380 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.884641 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.884678 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.884690 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.884707 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.884718 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:01Z","lastTransitionTime":"2025-12-08T17:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.887794 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7svb\" (UniqueName: \"kubernetes.io/projected/94c49b3d-dce8-4a73-895a-32a521a06b22-kube-api-access-s7svb\") pod \"ovnkube-control-plane-57b78d8988-x68jp\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890122 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890179 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890199 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890218 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890239 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890259 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890277 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890300 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890317 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890334 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890349 5118 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890367 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: 
\"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890397 5118 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890418 5118 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890438 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890455 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890476 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovn-node-metrics-cert\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890269 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckfdv\" (UniqueName: \"kubernetes.io/projected/ca4a524a-a1cb-4e10-8765-aa38225d2de3-kube-api-access-ckfdv\") pod \"multus-additional-cni-plugins-lq9nf\" (UID: \"ca4a524a-a1cb-4e10-8765-aa38225d2de3\") " pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890482 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890531 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890543 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890554 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890564 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890572 5118 reconciler_common.go:299] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890580 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890591 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890600 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890608 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890617 5118 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.890626 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.893405 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg6bq\" (UniqueName: \"kubernetes.io/projected/5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00-kube-api-access-kg6bq\") pod \"node-ca-pvtml\" (UID: \"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00\") " pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.893531 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj2k8\" (UniqueName: \"kubernetes.io/projected/b10e1655-f317-439b-8188-cbfbebc4d756-kube-api-access-fj2k8\") pod \"node-resolver-vk6p6\" (UID: \"b10e1655-f317-439b-8188-cbfbebc4d756\") " pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.895552 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ftnr\" (UniqueName: \"kubernetes.io/projected/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-kube-api-access-2ftnr\") pod \"ovnkube-node-wr4x4\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.895682 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-proxy-tls\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.895794 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnmrq\" (UniqueName: 
\"kubernetes.io/projected/cee6a3dc-47d4-4996-9c78-cb6c6b626d71-kube-api-access-gnmrq\") pod \"machine-config-daemon-8vxnt\" (UID: \"cee6a3dc-47d4-4996-9c78-cb6c6b626d71\") " pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.896501 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdncr\" (UniqueName: \"kubernetes.io/projected/a091751f-234c-43ee-8324-ebb98bb3ec36-kube-api-access-tdncr\") pod \"multus-dlvbf\" (UID: \"a091751f-234c-43ee-8324-ebb98bb3ec36\") " pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.897104 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-848dl\" (UniqueName: \"kubernetes.io/projected/e666ddb1-3625-4468-9d05-21215b5041c1-kube-api-access-848dl\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.911925 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.922408 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:44:01 crc kubenswrapper[5118]: W1208 17:44:01.931866 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-b8cebb199e47de2db1b7ff4bdc33c117475ac636abcfae88785cc06b3f88721b WatchSource:0}: Error finding container b8cebb199e47de2db1b7ff4bdc33c117475ac636abcfae88785cc06b3f88721b: Status 404 returned error can't find the container with id b8cebb199e47de2db1b7ff4bdc33c117475ac636abcfae88785cc06b3f88721b Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.939235 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.953031 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pvtml" Dec 08 17:44:01 crc kubenswrapper[5118]: W1208 17:44:01.953459 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-d0af98d6d904b2016ec9e77ea12081fbc8d551535a58aaebdc2377a52d680586 WatchSource:0}: Error finding container d0af98d6d904b2016ec9e77ea12081fbc8d551535a58aaebdc2377a52d680586: Status 404 returned error can't find the container with id d0af98d6d904b2016ec9e77ea12081fbc8d551535a58aaebdc2377a52d680586 Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.961355 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.969673 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.977702 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dlvbf" Dec 08 17:44:01 crc kubenswrapper[5118]: W1208 17:44:01.979001 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcee6a3dc_47d4_4996_9c78_cb6c6b626d71.slice/crio-f7856fbd6fd9cb86d7b9aa0ef2d0e18553c16582337d2b3f1f9df9c2a04ee699 WatchSource:0}: Error finding container f7856fbd6fd9cb86d7b9aa0ef2d0e18553c16582337d2b3f1f9df9c2a04ee699: Status 404 returned error can't find the container with id f7856fbd6fd9cb86d7b9aa0ef2d0e18553c16582337d2b3f1f9df9c2a04ee699 Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.987949 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vk6p6" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.989822 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.989911 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.989926 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.989941 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.989953 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:01Z","lastTransitionTime":"2025-12-08T17:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:01 crc kubenswrapper[5118]: W1208 17:44:01.991489 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca4a524a_a1cb_4e10_8765_aa38225d2de3.slice/crio-cc6acf289c2e66d765afab5ed500b16ff0e3759d092be25613e7a6b581905c92 WatchSource:0}: Error finding container cc6acf289c2e66d765afab5ed500b16ff0e3759d092be25613e7a6b581905c92: Status 404 returned error can't find the container with id cc6acf289c2e66d765afab5ed500b16ff0e3759d092be25613e7a6b581905c92 Dec 08 17:44:01 crc kubenswrapper[5118]: I1208 17:44:01.997266 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.002207 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:44:02 crc kubenswrapper[5118]: W1208 17:44:02.092125 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c49b3d_dce8_4a73_895a_32a521a06b22.slice/crio-01292c3808477622b154cb0dbf945d6f4ebae1e270172a575c142ec764f01ed1 WatchSource:0}: Error finding container 01292c3808477622b154cb0dbf945d6f4ebae1e270172a575c142ec764f01ed1: Status 404 returned error can't find the container with id 01292c3808477622b154cb0dbf945d6f4ebae1e270172a575c142ec764f01ed1 Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.092348 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.092409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.092423 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.092441 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.092472 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.194989 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.195028 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.195041 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.195060 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.195071 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.293504 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.293566 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.293589 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.293619 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.293637 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.293715 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.293762 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.293749451 +0000 UTC m=+100.195073545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294117 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.294107041 +0000 UTC m=+100.195431135 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294182 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294211 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.294203363 +0000 UTC m=+100.195527447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294284 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294294 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294303 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294326 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.294320306 +0000 UTC m=+100.195644400 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294385 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294395 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294401 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.294424 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.294415939 +0000 UTC m=+100.195740033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.296486 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.296504 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.296513 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.296524 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.296533 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.342367 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vk6p6" event={"ID":"b10e1655-f317-439b-8188-cbfbebc4d756","Type":"ContainerStarted","Data":"07ad4c5353355d310b8b9b927fde247fecd3395624c619696fcf3ee7c81fa5eb"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.344768 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3" exitCode=0 Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.344814 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.344903 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"0c6086c3886b43de86d23f8567df25b9e847eae885fa03059bea7c7f7ee6ad9a"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.347047 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"a7254713d13ebebce6db9aed958275b7b933454c400b6c7a8517c4ac9f0f39e8"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.347080 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"d85cd5695eb2bfdc0550d3965b70689a69b9c315b96786c2d8f2213d1fc4d407"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.347094 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"f7856fbd6fd9cb86d7b9aa0ef2d0e18553c16582337d2b3f1f9df9c2a04ee699"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.354139 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"38df974c2cdbaeeb86a37e8a858733e694170546ad885aa0a7f0fe880e7e1554"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.354174 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"074c78acd0e8961abbda7a3db7b7e8c4d85fac92e47e738912394041c17755b4"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.354185 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"b8cebb199e47de2db1b7ff4bdc33c117475ac636abcfae88785cc06b3f88721b"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.355663 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"efdf8bdcbe5a9df2e1b8038223d069c1a0bb94159b7c34eb8e7b004e818ab0c9"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.355689 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"332d0ebd1cb188fead3676f4609b7952e26df9b9bed93f7751f301a81f16e188"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.357253 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" event={"ID":"94c49b3d-dce8-4a73-895a-32a521a06b22","Type":"ContainerStarted","Data":"01292c3808477622b154cb0dbf945d6f4ebae1e270172a575c142ec764f01ed1"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.361964 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dlvbf" event={"ID":"a091751f-234c-43ee-8324-ebb98bb3ec36","Type":"ContainerStarted","Data":"3ec5898360e3a8a8e2d6ba11a4a74a5c238597fccf0ae0ce228c5792483aee54"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.361997 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dlvbf" event={"ID":"a091751f-234c-43ee-8324-ebb98bb3ec36","Type":"ContainerStarted","Data":"b5753fcf51964054902e260d06e2aca87a737b900f58668b92d5c8feaa41640e"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.370107 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerStarted","Data":"25d8e9fa80e171e5549dce1ead3dbe401293c069fb29e48d51f21e23d3bddfdf"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.370154 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerStarted","Data":"cc6acf289c2e66d765afab5ed500b16ff0e3759d092be25613e7a6b581905c92"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.376382 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pvtml" event={"ID":"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00","Type":"ContainerStarted","Data":"848ce57c78376f83ecf7ef7dc28195361bec10020e87076c1eddc78699103a74"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.376429 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pvtml" event={"ID":"5a1c0a2f-d8ef-48d5-90d0-9d8fb12e8a00","Type":"ContainerStarted","Data":"aa7906810f9012af331236a6090c8c0c28b4bc86a012e78306277e863d4888ee"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.377792 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"d0af98d6d904b2016ec9e77ea12081fbc8d551535a58aaebdc2377a52d680586"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.384530 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.3845181069999999 podStartE2EDuration="1.384518107s" podCreationTimestamp="2025-12-08 17:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.383559371 +0000 UTC m=+99.284883485" 
watchObservedRunningTime="2025-12-08 17:44:02.384518107 +0000 UTC m=+99.285842201" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.394962 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.395719 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: E1208 17:44:02.395784 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs podName:e666ddb1-3625-4468-9d05-21215b5041c1 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.395767054 +0000 UTC m=+100.297091248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs") pod "network-metrics-daemon-54w78" (UID: "e666ddb1-3625-4468-9d05-21215b5041c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.404070 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.404107 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.404118 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.404132 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.404144 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.426905 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.426887943 podStartE2EDuration="1.426887943s" podCreationTimestamp="2025-12-08 17:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.40808403 +0000 UTC m=+99.309408124" watchObservedRunningTime="2025-12-08 17:44:02.426887943 +0000 UTC m=+99.328212037" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.507167 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.507215 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.507227 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.507244 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.507256 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.508116 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.5081064180000001 podStartE2EDuration="1.508106418s" podCreationTimestamp="2025-12-08 17:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.491994029 +0000 UTC m=+99.393318123" watchObservedRunningTime="2025-12-08 17:44:02.508106418 +0000 UTC m=+99.409430512" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.508404 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.508399306 podStartE2EDuration="1.508399306s" podCreationTimestamp="2025-12-08 17:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.507491212 +0000 UTC m=+99.408815326" watchObservedRunningTime="2025-12-08 17:44:02.508399306 +0000 UTC m=+99.409723400" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.609421 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.609463 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.609475 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 
17:44:02.609490 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.609501 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.677287 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podStartSLOduration=79.677270892 podStartE2EDuration="1m19.677270892s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.676216604 +0000 UTC m=+99.577540698" watchObservedRunningTime="2025-12-08 17:44:02.677270892 +0000 UTC m=+99.578594986" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.712166 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.712234 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.712244 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.712258 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.712267 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.718373 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dlvbf" podStartSLOduration=79.718355273 podStartE2EDuration="1m19.718355273s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.717801108 +0000 UTC m=+99.619125202" watchObservedRunningTime="2025-12-08 17:44:02.718355273 +0000 UTC m=+99.619679357" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.734132 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pvtml" podStartSLOduration=79.734113323 podStartE2EDuration="1m19.734113323s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.733675611 +0000 UTC m=+99.634999705" watchObservedRunningTime="2025-12-08 17:44:02.734113323 +0000 UTC m=+99.635437427" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.814543 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.814581 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.814590 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.814604 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.814613 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.916667 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.916723 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.916735 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.916753 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:02 crc kubenswrapper[5118]: I1208 17:44:02.916764 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:02Z","lastTransitionTime":"2025-12-08T17:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.019233 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.019294 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.019308 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.019327 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.019342 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.121728 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.121775 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.121787 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.121803 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.121815 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.224337 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.224382 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.224397 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.224412 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.224421 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.309046 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.309270 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.309243091 +0000 UTC m=+102.210567185 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.309502 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.309552 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.309673 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.309783 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.309801 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.309815 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.309913 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod 
\"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.309948 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310014 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.310000571 +0000 UTC m=+102.211324735 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310031 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.310024142 +0000 UTC m=+102.211348236 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310138 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310181 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.310172556 +0000 UTC m=+102.211496650 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310183 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310216 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310233 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.310334 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.31031387 +0000 UTC m=+102.211637964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.328324 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.328357 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.328369 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.328384 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.328428 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.402651 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.402693 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.402702 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.402710 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.404245 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" event={"ID":"94c49b3d-dce8-4a73-895a-32a521a06b22","Type":"ContainerStarted","Data":"7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.404271 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" event={"ID":"94c49b3d-dce8-4a73-895a-32a521a06b22","Type":"ContainerStarted","Data":"a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.405928 5118 generic.go:358] "Generic (PLEG): container finished" podID="ca4a524a-a1cb-4e10-8765-aa38225d2de3" containerID="25d8e9fa80e171e5549dce1ead3dbe401293c069fb29e48d51f21e23d3bddfdf" exitCode=0 Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.405981 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerDied","Data":"25d8e9fa80e171e5549dce1ead3dbe401293c069fb29e48d51f21e23d3bddfdf"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.407913 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vk6p6" event={"ID":"b10e1655-f317-439b-8188-cbfbebc4d756","Type":"ContainerStarted","Data":"c000c618f13d76276f3ddf0934c3f0586361d8f290573fef3a5b94aeeb2773d3"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.410491 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.410596 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.410660 
5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs podName:e666ddb1-3625-4468-9d05-21215b5041c1 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.410642627 +0000 UTC m=+102.311966721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs") pod "network-metrics-daemon-54w78" (UID: "e666ddb1-3625-4468-9d05-21215b5041c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.420213 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" podStartSLOduration=80.420197757 podStartE2EDuration="1m20.420197757s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:03.419524009 +0000 UTC m=+100.320848103" watchObservedRunningTime="2025-12-08 17:44:03.420197757 +0000 UTC m=+100.321521851" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.428994 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.429094 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.429412 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.429461 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.429504 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.429545 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.429582 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:03 crc kubenswrapper[5118]: E1208 17:44:03.429619 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.433325 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.433372 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.433382 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.433404 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.433445 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.434197 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.435022 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.436919 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.438425 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.440562 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.443460 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.444645 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 
17:44:03.446399 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.447454 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.449268 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.451047 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.452217 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.453455 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.454929 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.455797 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.456540 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.458070 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.459998 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.461215 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.465515 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.467645 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.472039 5118 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.476353 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.486367 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.487940 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.490425 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.504938 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.506824 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.519619 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.521067 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.523252 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.524589 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.526562 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.527553 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.528676 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.529277 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.530472 5118 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.530572 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.533079 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.534565 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.535629 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.537991 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.538028 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.538050 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.538068 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.538080 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.539714 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.540673 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.543528 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.544553 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.545262 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.546849 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.551118 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.552501 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.553918 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.554946 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.556372 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.559799 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.561980 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.567623 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 08 17:44:03 
crc kubenswrapper[5118]: I1208 17:44:03.568735 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.569535 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.570378 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.640020 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.640060 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.640072 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.640088 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.640100 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.741808 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.742103 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.742116 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.742132 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.742142 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.845271 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.845310 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.845319 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.845332 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.845343 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.948288 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.948352 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.948371 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.948396 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:03 crc kubenswrapper[5118]: I1208 17:44:03.948413 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:03Z","lastTransitionTime":"2025-12-08T17:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.051390 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.051447 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.051459 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.051480 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.051493 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.153906 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.153961 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.153979 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.153999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.154013 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.255791 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.255856 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.255925 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.255955 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.255974 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.358636 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.358719 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.358745 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.358774 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.358795 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.416771 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.416871 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.419009 5118 generic.go:358] "Generic (PLEG): container finished" podID="ca4a524a-a1cb-4e10-8765-aa38225d2de3" containerID="9729ebf1bded7c306f93791c6a5f4e40809eb4b143d79a3c456d59bc41d5671d" exitCode=0 Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.419828 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerDied","Data":"9729ebf1bded7c306f93791c6a5f4e40809eb4b143d79a3c456d59bc41d5671d"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.454083 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vk6p6" podStartSLOduration=81.454058898 podStartE2EDuration="1m21.454058898s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:03.478731254 +0000 UTC m=+100.380055348" watchObservedRunningTime="2025-12-08 17:44:04.454058898 +0000 UTC m=+101.355383022" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.460858 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.460962 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.460982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.461007 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.461029 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.563640 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.563686 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.563698 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.563714 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.563726 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.666679 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.666733 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.666750 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.666768 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.666780 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.771418 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.771462 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.771474 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.771490 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.771502 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.874196 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.874250 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.874269 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.874294 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.874316 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.976624 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.976673 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.976686 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.976702 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:04 crc kubenswrapper[5118]: I1208 17:44:04.976715 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:04Z","lastTransitionTime":"2025-12-08T17:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.078990 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.079129 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.079150 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.079173 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.079191 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.181316 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.181385 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.181397 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.181412 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.181422 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.283448 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.283519 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.283541 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.283567 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.283585 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.337594 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.337828 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.337922 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.338006 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.338045 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338125 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.338074241 +0000 UTC m=+106.239398375 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338226 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338253 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338271 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338345 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.338323078 +0000 UTC m=+106.239647212 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338422 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338431 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338466 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338486 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.338471752 +0000 UTC m=+106.239795856 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338492 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338268 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338527 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.338518923 +0000 UTC m=+106.239843027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.338578 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.338551424 +0000 UTC m=+106.239875568 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.385661 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.385719 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.385736 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.385755 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.385770 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.427304 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.427459 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.427571 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.427614 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.427642 5118 generic.go:358] "Generic (PLEG): container finished" podID="ca4a524a-a1cb-4e10-8765-aa38225d2de3" containerID="cae1742b45d23c7cd960c8c320d8513c7b5a93201f5c33477a48f56da9b140fd" exitCode=0 Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.427811 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.428036 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.428288 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.428574 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.434082 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerDied","Data":"cae1742b45d23c7cd960c8c320d8513c7b5a93201f5c33477a48f56da9b140fd"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.434132 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"327a69e92a5da721f07a5c09207ec4256425d2320c83a4c06dcd2d066e8fe028"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.438932 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.439184 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: E1208 17:44:05.439292 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs podName:e666ddb1-3625-4468-9d05-21215b5041c1 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.439269061 +0000 UTC m=+106.340593175 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs") pod "network-metrics-daemon-54w78" (UID: "e666ddb1-3625-4468-9d05-21215b5041c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.492662 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.492731 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.492747 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.492765 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.492779 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.595032 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.595095 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.595109 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.595130 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.595143 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.697510 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.697555 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.697565 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.697586 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.697596 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.799740 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.799782 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.799791 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.799804 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.799813 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.901856 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.901915 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.901925 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.901941 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:05 crc kubenswrapper[5118]: I1208 17:44:05.901952 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:05Z","lastTransitionTime":"2025-12-08T17:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.004246 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.004319 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.004339 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.004361 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.004373 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.107249 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.107332 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.107358 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.107388 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.107410 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.210367 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.210454 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.210481 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.210527 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.210551 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.313513 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.313566 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.313578 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.313596 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.313609 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.416194 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.416274 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.416292 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.416319 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.416336 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.439705 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.443240 5118 generic.go:358] "Generic (PLEG): container finished" podID="ca4a524a-a1cb-4e10-8765-aa38225d2de3" containerID="90246cb702981cf2270d305ac50aab06f960ec41d7251eb20c534240e0a23ea2" exitCode=0 Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.443333 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerDied","Data":"90246cb702981cf2270d305ac50aab06f960ec41d7251eb20c534240e0a23ea2"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.525786 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.525835 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.525847 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.525863 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.525893 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.627753 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.627810 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.627861 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.627912 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.627929 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.730082 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.730128 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.730141 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.730159 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.730171 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.832579 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.832644 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.832667 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.832693 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.832713 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.935195 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.935279 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.935306 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.935338 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:06 crc kubenswrapper[5118]: I1208 17:44:06.935363 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:06Z","lastTransitionTime":"2025-12-08T17:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.036920 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.036965 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.036974 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.036989 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.036999 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.138817 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.138853 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.138862 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.138892 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.138901 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.240654 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.240709 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.240725 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.240745 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.240759 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.343452 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.343511 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.343528 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.343551 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.343568 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.433505 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.433535 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.433697 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:07 crc kubenswrapper[5118]: E1208 17:44:07.433697 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.434354 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:07 crc kubenswrapper[5118]: E1208 17:44:07.434469 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:07 crc kubenswrapper[5118]: E1208 17:44:07.434556 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:07 crc kubenswrapper[5118]: E1208 17:44:07.434304 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.445776 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.445839 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.445857 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.445921 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.445953 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.452710 5118 generic.go:358] "Generic (PLEG): container finished" podID="ca4a524a-a1cb-4e10-8765-aa38225d2de3" containerID="e3561302266441262557afe797d4289078e03dd3be0f98f4f98e497a23340136" exitCode=0 Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.452838 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerDied","Data":"e3561302266441262557afe797d4289078e03dd3be0f98f4f98e497a23340136"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.548256 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.548313 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.548328 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.548347 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.548363 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.655536 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.655576 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.655588 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.655604 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.655615 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.758218 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.758254 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.758264 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.758278 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.758288 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.860619 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.860651 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.860659 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.860672 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.860681 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.963812 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.963932 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.963960 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.963994 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:07 crc kubenswrapper[5118]: I1208 17:44:07.964017 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:07Z","lastTransitionTime":"2025-12-08T17:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.067935 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.068028 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.068055 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.068086 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.068110 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.170961 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.171017 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.171036 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.171062 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.171080 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.274416 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.274700 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.274979 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.275002 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.275015 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.379227 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.379275 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.379292 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.379320 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.379338 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.465151 5118 generic.go:358] "Generic (PLEG): container finished" podID="ca4a524a-a1cb-4e10-8765-aa38225d2de3" containerID="a68c27fe3e7606a525290a34887408d3ebf142180f880e1323ce5131dacf307a" exitCode=0 Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.465232 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerDied","Data":"a68c27fe3e7606a525290a34887408d3ebf142180f880e1323ce5131dacf307a"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.473184 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerStarted","Data":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.473840 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.473970 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.474061 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.481926 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.482003 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.482034 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.482064 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.482117 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.512401 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.514789 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.526771 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podStartSLOduration=85.526752229 podStartE2EDuration="1m25.526752229s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:08.526094291 +0000 UTC m=+105.427418425" watchObservedRunningTime="2025-12-08 17:44:08.526752229 +0000 UTC m=+105.428076363" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.584091 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.584132 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.584143 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.584158 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.584168 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.686402 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.686484 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.686501 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.686527 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.686547 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.791798 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.791854 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.791873 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.791937 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.791974 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.894826 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.894931 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.894952 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.894979 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.894998 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.997303 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.997385 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.997407 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.997437 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:08 crc kubenswrapper[5118]: I1208 17:44:08.997455 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:08Z","lastTransitionTime":"2025-12-08T17:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.100061 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.100134 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.100153 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.100177 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.100195 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.205953 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.206016 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.206035 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.206059 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.206077 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.309042 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.309102 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.309122 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.309146 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.309166 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.384389 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.384577 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.384645 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:17.384613039 +0000 UTC m=+114.285937173 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.384707 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.384726 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.384797 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:17.384779253 +0000 UTC m=+114.286103377 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.384825 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.384870 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.384946 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.384980 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.384993 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.385065 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:17.385045921 +0000 UTC m=+114.286370015 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.385096 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.385162 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.385214 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.385237 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.385174 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:17.385152414 +0000 UTC m=+114.286476548 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.385433 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:17.38537375 +0000 UTC m=+114.286697884 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.412371 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.412426 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.412445 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.412472 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.412491 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.426868 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.427015 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.427029 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.427158 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.427178 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.427270 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.427376 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.427518 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.485700 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.485866 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: E1208 17:44:09.485958 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs podName:e666ddb1-3625-4468-9d05-21215b5041c1 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:17.485937593 +0000 UTC m=+114.387261707 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs") pod "network-metrics-daemon-54w78" (UID: "e666ddb1-3625-4468-9d05-21215b5041c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.488347 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" event={"ID":"ca4a524a-a1cb-4e10-8765-aa38225d2de3","Type":"ContainerStarted","Data":"cbcc1281a478a888c3bd7fd7c7d8a4f67c4d33a6ce7c2f6466f21b5e40048ab0"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.514895 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.514955 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.514970 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.514991 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.515008 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.617263 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.617348 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.617375 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.617405 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.617428 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.719844 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.719948 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.719976 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.720002 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.720020 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.823004 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.823091 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.823118 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.823152 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.823179 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.926828 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.926917 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.926939 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.926996 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:09 crc kubenswrapper[5118]: I1208 17:44:09.927016 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:09Z","lastTransitionTime":"2025-12-08T17:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.022053 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.022111 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.022125 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.022145 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.022158 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:44:10Z","lastTransitionTime":"2025-12-08T17:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.073031 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lq9nf" podStartSLOduration=87.073008656 podStartE2EDuration="1m27.073008656s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:09.522217903 +0000 UTC m=+106.423542077" watchObservedRunningTime="2025-12-08 17:44:10.073008656 +0000 UTC m=+106.974332750" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.073852 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f"] Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.097567 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.101647 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.101922 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.102432 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.102473 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.192712 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/712f1b2c-7912-41b1-8c4e-737a0163088b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.192771 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/712f1b2c-7912-41b1-8c4e-737a0163088b-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.192832 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/712f1b2c-7912-41b1-8c4e-737a0163088b-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.192865 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/712f1b2c-7912-41b1-8c4e-737a0163088b-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.192917 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/712f1b2c-7912-41b1-8c4e-737a0163088b-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.293671 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/712f1b2c-7912-41b1-8c4e-737a0163088b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.293718 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/712f1b2c-7912-41b1-8c4e-737a0163088b-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.293752 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/712f1b2c-7912-41b1-8c4e-737a0163088b-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.293787 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/712f1b2c-7912-41b1-8c4e-737a0163088b-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.293842 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/712f1b2c-7912-41b1-8c4e-737a0163088b-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.293850 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/712f1b2c-7912-41b1-8c4e-737a0163088b-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.293789 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/712f1b2c-7912-41b1-8c4e-737a0163088b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.294952 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/712f1b2c-7912-41b1-8c4e-737a0163088b-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.308748 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/712f1b2c-7912-41b1-8c4e-737a0163088b-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.309203 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/712f1b2c-7912-41b1-8c4e-737a0163088b-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-sft9f\" (UID: \"712f1b2c-7912-41b1-8c4e-737a0163088b\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.339350 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.346642 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.410183 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.485510 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-54w78"] Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.485639 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:10 crc kubenswrapper[5118]: E1208 17:44:10.485741 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:10 crc kubenswrapper[5118]: I1208 17:44:10.504952 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" event={"ID":"712f1b2c-7912-41b1-8c4e-737a0163088b","Type":"ContainerStarted","Data":"cf24f2f1f23fd289ad4c655f62cb3428d12c231aa1dafd06331fee88c751d066"} Dec 08 17:44:11 crc kubenswrapper[5118]: I1208 17:44:11.433761 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:11 crc kubenswrapper[5118]: I1208 17:44:11.433779 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:11 crc kubenswrapper[5118]: I1208 17:44:11.433819 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:11 crc kubenswrapper[5118]: E1208 17:44:11.435582 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:11 crc kubenswrapper[5118]: E1208 17:44:11.435736 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:11 crc kubenswrapper[5118]: E1208 17:44:11.435938 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:11 crc kubenswrapper[5118]: I1208 17:44:11.508858 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" event={"ID":"712f1b2c-7912-41b1-8c4e-737a0163088b","Type":"ContainerStarted","Data":"85193f73a09f3878e519a7d9ba859abf6d4baa4006caf5d96b87f776d8ceb014"} Dec 08 17:44:12 crc kubenswrapper[5118]: I1208 17:44:12.426749 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:12 crc kubenswrapper[5118]: E1208 17:44:12.427298 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:13 crc kubenswrapper[5118]: I1208 17:44:13.430934 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:13 crc kubenswrapper[5118]: E1208 17:44:13.431054 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:13 crc kubenswrapper[5118]: I1208 17:44:13.431116 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:13 crc kubenswrapper[5118]: E1208 17:44:13.431177 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:13 crc kubenswrapper[5118]: I1208 17:44:13.431721 5118 scope.go:117] "RemoveContainer" containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" Dec 08 17:44:13 crc kubenswrapper[5118]: I1208 17:44:13.431843 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:13 crc kubenswrapper[5118]: E1208 17:44:13.432216 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:14 crc kubenswrapper[5118]: I1208 17:44:14.426518 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:14 crc kubenswrapper[5118]: E1208 17:44:14.426674 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-54w78" podUID="e666ddb1-3625-4468-9d05-21215b5041c1" Dec 08 17:44:14 crc kubenswrapper[5118]: I1208 17:44:14.525552 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:44:14 crc kubenswrapper[5118]: I1208 17:44:14.526849 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307"} Dec 08 17:44:14 crc kubenswrapper[5118]: I1208 17:44:14.527709 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:14 crc kubenswrapper[5118]: I1208 17:44:14.554318 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sft9f" podStartSLOduration=91.554295382 podStartE2EDuration="1m31.554295382s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:11.52342206 +0000 UTC m=+108.424746154" watchObservedRunningTime="2025-12-08 17:44:14.554295382 +0000 UTC m=+111.455619486" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.429590 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.429590 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:15 crc kubenswrapper[5118]: E1208 17:44:15.429776 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:44:15 crc kubenswrapper[5118]: E1208 17:44:15.429692 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.429601 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:15 crc kubenswrapper[5118]: E1208 17:44:15.429869 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.682847 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.683032 5118 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.710858 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.71084327 podStartE2EDuration="14.71084327s" podCreationTimestamp="2025-12-08 17:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:14.555908866 +0000 UTC m=+111.457232970" watchObservedRunningTime="2025-12-08 17:44:15.71084327 +0000 UTC m=+112.612167364" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.711957 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8h8fl"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.714946 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6wjgz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.715167 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.717137 5118 status_manager.go:895] "Failed to get status for pod" podUID="695dd41c-159e-4e22-98e5-e27fdf4296fd" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" err="pods \"apiserver-9ddfb9f55-8h8fl\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.717555 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:44:15 crc kubenswrapper[5118]: E1208 17:44:15.718844 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" type="*v1.Secret" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.719046 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.721443 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.721696 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.722893 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.724986 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.725302 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.729562 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.729832 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.730755 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.731038 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.731514 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.731868 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.732271 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.736147 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.736536 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.737919 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.738028 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.738739 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.738950 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.741097 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.742768 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.758137 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.758279 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.758471 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.758628 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.761046 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5httz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.763241 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.763414 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.764019 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.765083 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.765811 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.765918 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.766238 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.768271 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.770290 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-v9sxk"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.770431 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.774607 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.775437 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.778527 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.779009 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-85wdh"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.779105 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.779361 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.779596 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.779809 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.779964 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.780071 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.780230 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.780593 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.780711 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.780862 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 17:44:15 crc 
kubenswrapper[5118]: I1208 17:44:15.780990 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.781105 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.781183 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.781211 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.781627 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.783514 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.786158 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.786329 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.788153 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.788375 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.788641 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.788854 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.788990 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.789150 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.789302 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.789435 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.789599 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.789796 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.790034 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.790320 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.790462 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.791447 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.791756 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.792154 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.792359 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.792620 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.792841 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.793038 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.793164 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.793281 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.793820 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-rscz2"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.794639 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.794762 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.794977 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.795145 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.795408 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.795448 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.795743 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.795901 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.796319 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.800109 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.801150 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.819624 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.821504 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.821771 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.822022 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.825094 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.825439 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.826190 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.839793 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.839995 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.840124 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.840299 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.840383 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.840992 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.840994 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.841021 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.844324 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.846729 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-d69qv"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.846846 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.846970 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.849092 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.851170 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.851892 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.854478 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.854648 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.857900 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.858067 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.858111 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.858231 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.858367 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.859044 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.859342 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.859496 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.859710 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.859961 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.860215 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.860402 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.860614 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.861999 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.862909 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-487qx"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.863025 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.863771 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.864332 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.864639 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.865288 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-x7wvx"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.865391 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.865764 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.867378 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.869454 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-9b988"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.870199 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-x7wvx" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.871643 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s6hn4"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.871728 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872040 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872089 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872161 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-tmp\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872193 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6n9\" (UniqueName: \"kubernetes.io/projected/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-kube-api-access-mb6n9\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872223 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd2702-e20f-439b-b2c7-27095126b87e-serving-cert\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872229 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872245 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a749ad3-837c-4804-b23c-2abb017b5b82-config\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872266 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wb2r\" (UniqueName: \"kubernetes.io/projected/837f85a8-fff5-46a0-b1d5-2d51271f415a-kube-api-access-8wb2r\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872066 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872298 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872318 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-images\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872297 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872432 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837f85a8-fff5-46a0-b1d5-2d51271f415a-config\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872481 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-etcd-client\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872505 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-audit-dir\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872551 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-client-ca\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872573 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872582 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-etcd-client\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872628 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-encryption-config\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872673 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t9zh\" (UniqueName: \"kubernetes.io/projected/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-kube-api-access-2t9zh\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872732 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b33fd8-46b7-46e9-bef9-ec6b3f035300-serving-cert\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872755 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwvsf\" (UniqueName: \"kubernetes.io/projected/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-kube-api-access-bwvsf\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872780 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872787 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872816 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/695dd41c-159e-4e22-98e5-e27fdf4296fd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872836 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-etcd-serving-ca\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc 
kubenswrapper[5118]: I1208 17:44:15.872898 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-config\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872917 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-audit-policies\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872952 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872973 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/695dd41c-159e-4e22-98e5-e27fdf4296fd-audit-dir\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.872986 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfwp\" (UniqueName: \"kubernetes.io/projected/695dd41c-159e-4e22-98e5-e27fdf4296fd-kube-api-access-nsfwp\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873002 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkd6h\" (UniqueName: \"kubernetes.io/projected/8dcd2702-e20f-439b-b2c7-27095126b87e-kube-api-access-lkd6h\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873062 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-config\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873089 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-encryption-config\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873148 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/f5c1e280-e9c9-4a30-bb13-023852fd940b-webhook-certs\") pod \"multus-admission-controller-69db94689b-v9sxk\" (UID: \"f5c1e280-e9c9-4a30-bb13-023852fd940b\") " pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873169 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcp6m\" (UniqueName: \"kubernetes.io/projected/742843af-c521-4d4a-beea-e6feae8140e1-kube-api-access-tcp6m\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873188 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-serving-cert\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873206 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-client-ca\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873227 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-serving-cert\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873242 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/742843af-c521-4d4a-beea-e6feae8140e1-config-volume\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873260 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a749ad3-837c-4804-b23c-2abb017b5b82-images\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873274 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-image-import-ca\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873303 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/28b33fd8-46b7-46e9-bef9-ec6b3f035300-kube-api-access\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873338 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873386 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-config\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873414 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2702-e20f-439b-b2c7-27095126b87e-tmp\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873436 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhvr7\" (UniqueName: \"kubernetes.io/projected/1a749ad3-837c-4804-b23c-2abb017b5b82-kube-api-access-lhvr7\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873458 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-audit\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873485 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/742843af-c521-4d4a-beea-e6feae8140e1-secret-volume\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873505 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873527 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873559 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-serving-cert\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873591 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwkwh\" (UniqueName: \"kubernetes.io/projected/f5c1e280-e9c9-4a30-bb13-023852fd940b-kube-api-access-gwkwh\") pod \"multus-admission-controller-69db94689b-v9sxk\" (UID: \"f5c1e280-e9c9-4a30-bb13-023852fd940b\") " pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873625 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/837f85a8-fff5-46a0-b1d5-2d51271f415a-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873643 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873662 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/28b33fd8-46b7-46e9-bef9-ec6b3f035300-tmp-dir\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873701 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a749ad3-837c-4804-b23c-2abb017b5b82-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.873717 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b33fd8-46b7-46e9-bef9-ec6b3f035300-config\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.875270 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8h8fl"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 
17:44:15.875298 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.875310 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.875319 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.875329 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5httz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.875339 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.875748 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.877903 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c5tbq"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.878051 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.878618 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.882445 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-79mps"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.882734 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.890408 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-k26tc"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.891644 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.892417 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.895984 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-dhfvx"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.896117 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.900317 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.900518 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.900916 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.910146 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.910299 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.914314 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.914481 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.917141 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.925171 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ztdrc"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.925482 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.929725 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-5scww"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.929898 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.932660 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.932804 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.932853 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.932891 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v69x6"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.936206 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-psb45"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.936276 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.936389 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.938963 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.938989 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.938998 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939007 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939016 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939025 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6wjgz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939032 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939040 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939049 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939059 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939082 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939109 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939220 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-85wdh"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.939235 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bdhnb"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.942378 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.942434 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qrls7"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.942494 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.950843 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.950900 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s6hn4"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.950975 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.951000 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-v9sxk"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.952865 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c5tbq"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.954202 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-79mps"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.956934 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.963219 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-5scww"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.965826 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qrls7"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.966915 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.967918 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-d69qv"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.969074 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.970203 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-9b988"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.971341 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-x7wvx"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.973716 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-k26tc"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974416 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-client-ca\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974446 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-serving-cert\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974464 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/742843af-c521-4d4a-beea-e6feae8140e1-config-volume\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974480 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a749ad3-837c-4804-b23c-2abb017b5b82-images\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974497 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-image-import-ca\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974514 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28b33fd8-46b7-46e9-bef9-ec6b3f035300-kube-api-access\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974531 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.974553 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-config\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975391 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975433 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-client-ca\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975487 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/8dcd2702-e20f-439b-b2c7-27095126b87e-tmp\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975512 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhvr7\" (UniqueName: \"kubernetes.io/projected/1a749ad3-837c-4804-b23c-2abb017b5b82-kube-api-access-lhvr7\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975531 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-audit\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975549 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/742843af-c521-4d4a-beea-e6feae8140e1-secret-volume\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975567 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975589 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b3a0959-d09e-4fd8-b931-d85bb42a3896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dhfht\" (UID: \"0b3a0959-d09e-4fd8-b931-d85bb42a3896\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975612 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975629 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp59p\" (UniqueName: \"kubernetes.io/projected/0b3a0959-d09e-4fd8-b931-d85bb42a3896-kube-api-access-vp59p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dhfht\" (UID: \"0b3a0959-d09e-4fd8-b931-d85bb42a3896\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975648 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/9a815eca-9800-4b68-adc1-5953173f4427-srv-cert\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975665 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpbb7\" (UniqueName: \"kubernetes.io/projected/2554c491-6bfb-47fd-9b76-c1da12e702b1-kube-api-access-lpbb7\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975683 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1cd09f9c-6a6f-438a-a982-082edc35a55c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975706 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-serving-cert\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975723 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975741 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fr6z\" (UniqueName: \"kubernetes.io/projected/fe85cb02-2d21-4fc3-92c1-6d060a006011-kube-api-access-5fr6z\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975759 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-785pk\" (UniqueName: \"kubernetes.io/projected/085a3a20-9b8f-4448-a4cb-89465f57027c-kube-api-access-785pk\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975781 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwkwh\" (UniqueName: \"kubernetes.io/projected/f5c1e280-e9c9-4a30-bb13-023852fd940b-kube-api-access-gwkwh\") pod \"multus-admission-controller-69db94689b-v9sxk\" (UID: \"f5c1e280-e9c9-4a30-bb13-023852fd940b\") " pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975799 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/837f85a8-fff5-46a0-b1d5-2d51271f415a-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975816 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fsgh\" (UniqueName: \"kubernetes.io/projected/f22fa87e-79cb-498c-a2ab-166d47fd70a5-kube-api-access-9fsgh\") pod \"cluster-samples-operator-6b564684c8-2cnx5\" (UID: \"f22fa87e-79cb-498c-a2ab-166d47fd70a5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975835 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cd09f9c-6a6f-438a-a982-082edc35a55c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975855 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975870 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2702-e20f-439b-b2c7-27095126b87e-tmp\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975911 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/1cd09f9c-6a6f-438a-a982-082edc35a55c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.975985 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976016 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/28b33fd8-46b7-46e9-bef9-ec6b3f035300-tmp-dir\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976053 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/085a3a20-9b8f-4448-a4cb-89465f57027c-tmpfs\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976238 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-audit\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976324 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a749ad3-837c-4804-b23c-2abb017b5b82-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976358 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b33fd8-46b7-46e9-bef9-ec6b3f035300-config\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976390 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976451 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9a815eca-9800-4b68-adc1-5953173f4427-tmpfs\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976475 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a52d6e07-c08e-4424-8a3f-50052c311604-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976500 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-tmp\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976525 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6n9\" (UniqueName: \"kubernetes.io/projected/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-kube-api-access-mb6n9\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976539 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/28b33fd8-46b7-46e9-bef9-ec6b3f035300-tmp-dir\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976548 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-metrics-certs\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976657 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd2702-e20f-439b-b2c7-27095126b87e-serving-cert\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976687 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a749ad3-837c-4804-b23c-2abb017b5b82-config\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976710 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wb2r\" (UniqueName: \"kubernetes.io/projected/837f85a8-fff5-46a0-b1d5-2d51271f415a-kube-api-access-8wb2r\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976733 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/085a3a20-9b8f-4448-a4cb-89465f57027c-apiservice-cert\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.976938 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.977224 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.977556 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/742843af-c521-4d4a-beea-e6feae8140e1-config-volume\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.977805 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a749ad3-837c-4804-b23c-2abb017b5b82-images\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.977859 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-dhfvx"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.977901 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978065 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a749ad3-837c-4804-b23c-2abb017b5b82-config\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978339 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-config\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b33fd8-46b7-46e9-bef9-ec6b3f035300-config\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978422 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ztdrc"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978477 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-image-import-ca\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978904 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978930 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.978975 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f22fa87e-79cb-498c-a2ab-166d47fd70a5-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-2cnx5\" (UID: \"f22fa87e-79cb-498c-a2ab-166d47fd70a5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979097 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-images\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979149 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2554c491-6bfb-47fd-9b76-c1da12e702b1-serving-cert\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979132 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-tmp\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979420 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979637 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837f85a8-fff5-46a0-b1d5-2d51271f415a-config\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979671 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-etcd-client\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979694 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-audit-dir\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979751 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mm8b\" (UniqueName: \"kubernetes.io/projected/9af82654-06bc-4376-bff5-d6adacce9785-kube-api-access-2mm8b\") pod 
\"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979758 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-images\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979799 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2554c491-6bfb-47fd-9b76-c1da12e702b1-config\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979831 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1cd09f9c-6a6f-438a-a982-082edc35a55c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979803 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-audit-dir\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979932 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-client-ca\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979973 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9a815eca-9800-4b68-adc1-5953173f4427-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.979998 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a52d6e07-c08e-4424-8a3f-50052c311604-config\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.980323 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837f85a8-fff5-46a0-b1d5-2d51271f415a-config\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.980768 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-client-ca\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.980840 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-etcd-client\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.980906 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-encryption-config\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.980936 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a52d6e07-c08e-4424-8a3f-50052c311604-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.980966 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9148080a-77e2-4847-840a-d67f837c8fbe-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-d8qsj\" (UID: \"9148080a-77e2-4847-840a-d67f837c8fbe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.981052 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v69x6"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.981320 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2t9zh\" (UniqueName: \"kubernetes.io/projected/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-kube-api-access-2t9zh\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.981440 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b33fd8-46b7-46e9-bef9-ec6b3f035300-serving-cert\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.981469 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvsf\" (UniqueName: 
\"kubernetes.io/projected/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-kube-api-access-bwvsf\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.981540 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp8xf\" (UniqueName: \"kubernetes.io/projected/82728066-0204-4d71-acff-8779194a3e3c-kube-api-access-pp8xf\") pod \"migrator-866fcbc849-5pp5q\" (UID: \"82728066-0204-4d71-acff-8779194a3e3c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.981664 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2drtz\" (UniqueName: \"kubernetes.io/projected/39c08b26-3404-4ffd-a53a-c86f0c654db7-kube-api-access-2drtz\") pod \"downloads-747b44746d-x7wvx\" (UID: \"39c08b26-3404-4ffd-a53a-c86f0c654db7\") " pod="openshift-console/downloads-747b44746d-x7wvx" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.982690 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-psjrr"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.982777 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.982841 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/742843af-c521-4d4a-beea-e6feae8140e1-secret-volume\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.982934 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/695dd41c-159e-4e22-98e5-e27fdf4296fd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983000 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-serving-cert\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983013 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-etcd-serving-ca\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983005 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/695dd41c-159e-4e22-98e5-e27fdf4296fd-node-pullsecrets\") pod 
\"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983115 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-config\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983151 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-stats-auth\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983181 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt495\" (UniqueName: \"kubernetes.io/projected/9a815eca-9800-4b68-adc1-5953173f4427-kube-api-access-rt495\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983211 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-audit-policies\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983235 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txll4\" (UniqueName: \"kubernetes.io/projected/9148080a-77e2-4847-840a-d67f837c8fbe-kube-api-access-txll4\") pod \"package-server-manager-77f986bd66-d8qsj\" (UID: \"9148080a-77e2-4847-840a-d67f837c8fbe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983265 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983286 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/695dd41c-159e-4e22-98e5-e27fdf4296fd-audit-dir\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983295 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-etcd-client\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983306 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nsfwp\" (UniqueName: \"kubernetes.io/projected/695dd41c-159e-4e22-98e5-e27fdf4296fd-kube-api-access-nsfwp\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983366 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe85cb02-2d21-4fc3-92c1-6d060a006011-service-ca-bundle\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983404 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/085a3a20-9b8f-4448-a4cb-89465f57027c-webhook-cert\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983647 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-etcd-serving-ca\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983778 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-config\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983794 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lkd6h\" (UniqueName: \"kubernetes.io/projected/8dcd2702-e20f-439b-b2c7-27095126b87e-kube-api-access-lkd6h\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983785 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/695dd41c-159e-4e22-98e5-e27fdf4296fd-audit-dir\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983832 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-default-certificate\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983865 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cd09f9c-6a6f-438a-a982-082edc35a55c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.983930 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-config\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984013 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-encryption-config\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984020 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-audit-policies\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984072 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c1e280-e9c9-4a30-bb13-023852fd940b-webhook-certs\") pod \"multus-admission-controller-69db94689b-v9sxk\" (UID: \"f5c1e280-e9c9-4a30-bb13-023852fd940b\") " pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984109 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcp6m\" (UniqueName: \"kubernetes.io/projected/742843af-c521-4d4a-beea-e6feae8140e1-kube-api-access-tcp6m\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984136 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af82654-06bc-4376-bff5-d6adacce9785-tmp\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984164 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a52d6e07-c08e-4424-8a3f-50052c311604-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984193 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/837f85a8-fff5-46a0-b1d5-2d51271f415a-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984197 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-serving-cert\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984259 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qlx\" (UniqueName: \"kubernetes.io/projected/1cd09f9c-6a6f-438a-a982-082edc35a55c-kube-api-access-m7qlx\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.984221 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695dd41c-159e-4e22-98e5-e27fdf4296fd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.985527 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-config\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.985727 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd2702-e20f-439b-b2c7-27095126b87e-serving-cert\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.985952 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a749ad3-837c-4804-b23c-2abb017b5b82-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.985998 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-encryption-config\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.986852 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b33fd8-46b7-46e9-bef9-ec6b3f035300-serving-cert\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.986992 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c1e280-e9c9-4a30-bb13-023852fd940b-webhook-certs\") pod 
\"multus-admission-controller-69db94689b-v9sxk\" (UID: \"f5c1e280-e9c9-4a30-bb13-023852fd940b\") " pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.987163 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-serving-cert\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.987243 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-etcd-client\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.988669 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-encryption-config\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.990282 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-psjrr"] Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.990409 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:15 crc kubenswrapper[5118]: I1208 17:44:15.997245 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.021452 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.037413 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.062003 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085253 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rt495\" (UniqueName: \"kubernetes.io/projected/9a815eca-9800-4b68-adc1-5953173f4427-kube-api-access-rt495\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085312 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-txll4\" (UniqueName: \"kubernetes.io/projected/9148080a-77e2-4847-840a-d67f837c8fbe-kube-api-access-txll4\") pod \"package-server-manager-77f986bd66-d8qsj\" (UID: \"9148080a-77e2-4847-840a-d67f837c8fbe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085355 5118 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe85cb02-2d21-4fc3-92c1-6d060a006011-service-ca-bundle\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085388 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/085a3a20-9b8f-4448-a4cb-89465f57027c-webhook-cert\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085434 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-default-certificate\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085479 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cd09f9c-6a6f-438a-a982-082edc35a55c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085564 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af82654-06bc-4376-bff5-d6adacce9785-tmp\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085630 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a52d6e07-c08e-4424-8a3f-50052c311604-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085700 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7qlx\" (UniqueName: \"kubernetes.io/projected/1cd09f9c-6a6f-438a-a982-082edc35a55c-kube-api-access-m7qlx\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085812 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b3a0959-d09e-4fd8-b931-d85bb42a3896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dhfht\" (UID: \"0b3a0959-d09e-4fd8-b931-d85bb42a3896\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085921 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vp59p\" (UniqueName: 
\"kubernetes.io/projected/0b3a0959-d09e-4fd8-b931-d85bb42a3896-kube-api-access-vp59p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dhfht\" (UID: \"0b3a0959-d09e-4fd8-b931-d85bb42a3896\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.085975 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9a815eca-9800-4b68-adc1-5953173f4427-srv-cert\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086026 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpbb7\" (UniqueName: \"kubernetes.io/projected/2554c491-6bfb-47fd-9b76-c1da12e702b1-kube-api-access-lpbb7\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086076 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1cd09f9c-6a6f-438a-a982-082edc35a55c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086134 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086184 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fr6z\" (UniqueName: \"kubernetes.io/projected/fe85cb02-2d21-4fc3-92c1-6d060a006011-kube-api-access-5fr6z\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086237 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-785pk\" (UniqueName: \"kubernetes.io/projected/085a3a20-9b8f-4448-a4cb-89465f57027c-kube-api-access-785pk\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086297 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9fsgh\" (UniqueName: \"kubernetes.io/projected/f22fa87e-79cb-498c-a2ab-166d47fd70a5-kube-api-access-9fsgh\") pod \"cluster-samples-operator-6b564684c8-2cnx5\" (UID: \"f22fa87e-79cb-498c-a2ab-166d47fd70a5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086344 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/1cd09f9c-6a6f-438a-a982-082edc35a55c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086390 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cd09f9c-6a6f-438a-a982-082edc35a55c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086398 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/1cd09f9c-6a6f-438a-a982-082edc35a55c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086457 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/085a3a20-9b8f-4448-a4cb-89465f57027c-tmpfs\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086526 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086572 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9a815eca-9800-4b68-adc1-5953173f4427-tmpfs\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086620 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a52d6e07-c08e-4424-8a3f-50052c311604-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086680 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-metrics-certs\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086754 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/085a3a20-9b8f-4448-a4cb-89465f57027c-apiservice-cert\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: 
\"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086835 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f22fa87e-79cb-498c-a2ab-166d47fd70a5-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-2cnx5\" (UID: \"f22fa87e-79cb-498c-a2ab-166d47fd70a5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.086930 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2554c491-6bfb-47fd-9b76-c1da12e702b1-serving-cert\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087020 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2mm8b\" (UniqueName: \"kubernetes.io/projected/9af82654-06bc-4376-bff5-d6adacce9785-kube-api-access-2mm8b\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087073 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2554c491-6bfb-47fd-9b76-c1da12e702b1-config\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087127 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1cd09f9c-6a6f-438a-a982-082edc35a55c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087185 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9a815eca-9800-4b68-adc1-5953173f4427-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087203 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe85cb02-2d21-4fc3-92c1-6d060a006011-service-ca-bundle\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087235 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a52d6e07-c08e-4424-8a3f-50052c311604-config\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087298 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a52d6e07-c08e-4424-8a3f-50052c311604-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087353 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9148080a-77e2-4847-840a-d67f837c8fbe-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-d8qsj\" (UID: \"9148080a-77e2-4847-840a-d67f837c8fbe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087417 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pp8xf\" (UniqueName: \"kubernetes.io/projected/82728066-0204-4d71-acff-8779194a3e3c-kube-api-access-pp8xf\") pod \"migrator-866fcbc849-5pp5q\" (UID: \"82728066-0204-4d71-acff-8779194a3e3c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087469 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2drtz\" (UniqueName: \"kubernetes.io/projected/39c08b26-3404-4ffd-a53a-c86f0c654db7-kube-api-access-2drtz\") pod \"downloads-747b44746d-x7wvx\" (UID: \"39c08b26-3404-4ffd-a53a-c86f0c654db7\") " pod="openshift-console/downloads-747b44746d-x7wvx" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.087595 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-stats-auth\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.089268 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af82654-06bc-4376-bff5-d6adacce9785-tmp\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.089646 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/1cd09f9c-6a6f-438a-a982-082edc35a55c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.090326 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a52d6e07-c08e-4424-8a3f-50052c311604-config\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:16 crc 
kubenswrapper[5118]: I1208 17:44:16.090803 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/085a3a20-9b8f-4448-a4cb-89465f57027c-tmpfs\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.090839 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b3a0959-d09e-4fd8-b931-d85bb42a3896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dhfht\" (UID: \"0b3a0959-d09e-4fd8-b931-d85bb42a3896\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.090815 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cd09f9c-6a6f-438a-a982-082edc35a55c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.091324 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.091576 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9a815eca-9800-4b68-adc1-5953173f4427-tmpfs\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.091657 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-default-certificate\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.091898 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2554c491-6bfb-47fd-9b76-c1da12e702b1-config\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.088820 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/085a3a20-9b8f-4448-a4cb-89465f57027c-webhook-cert\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.092919 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9148080a-77e2-4847-840a-d67f837c8fbe-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-d8qsj\" (UID: \"9148080a-77e2-4847-840a-d67f837c8fbe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.092959 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-stats-auth\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.093034 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9a815eca-9800-4b68-adc1-5953173f4427-srv-cert\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.093249 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a52d6e07-c08e-4424-8a3f-50052c311604-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.093278 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a52d6e07-c08e-4424-8a3f-50052c311604-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.094464 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1cd09f9c-6a6f-438a-a982-082edc35a55c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.094931 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2554c491-6bfb-47fd-9b76-c1da12e702b1-serving-cert\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.094961 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.095224 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9a815eca-9800-4b68-adc1-5953173f4427-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: 
\"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.097741 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.098368 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/085a3a20-9b8f-4448-a4cb-89465f57027c-apiservice-cert\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.099809 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe85cb02-2d21-4fc3-92c1-6d060a006011-metrics-certs\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.117833 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.137964 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.157800 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.178117 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.196722 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.236652 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.257427 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.276773 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.297531 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.313562 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f22fa87e-79cb-498c-a2ab-166d47fd70a5-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-2cnx5\" (UID: \"f22fa87e-79cb-498c-a2ab-166d47fd70a5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.318471 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.337136 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.357623 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.376664 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.397519 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.417363 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.425941 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.436838 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.468423 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.477091 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.497405 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.517250 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.538687 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.557596 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.577763 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.597366 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.617714 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.638267 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.658570 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.677454 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.697159 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.717780 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.737215 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.768108 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.777122 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.797755 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.817441 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.837729 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.856630 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.876610 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.896815 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.915757 5118 request.go:752] "Waited before sending request" delay="1.001090157s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-74nwh&limit=500&resourceVersion=0" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.917319 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.946252 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.956645 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:16 crc 
kubenswrapper[5118]: E1208 17:44:16.975768 5118 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 08 17:44:16 crc kubenswrapper[5118]: E1208 17:44:16.975908 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-serving-cert podName:695dd41c-159e-4e22-98e5-e27fdf4296fd nodeName:}" failed. No retries permitted until 2025-12-08 17:44:17.475856165 +0000 UTC m=+114.377180269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-serving-cert") pod "apiserver-9ddfb9f55-8h8fl" (UID: "695dd41c-159e-4e22-98e5-e27fdf4296fd") : failed to sync secret cache: timed out waiting for the condition Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.977624 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 17:44:16 crc kubenswrapper[5118]: I1208 17:44:16.998112 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.017591 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.037926 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.057991 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.076770 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.097978 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.116986 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.138038 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.157767 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.176965 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.198247 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.219081 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.238735 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.288966 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.289708 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.297049 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.317780 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.338083 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.358161 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.378069 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.397928 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.410029 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.410273 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410310 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:33.410273844 +0000 UTC m=+130.311597968 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410409 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.410497 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410535 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:33.41050311 +0000 UTC m=+130.311827254 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.410614 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410630 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410722 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:33.410706136 +0000 UTC m=+130.312030270 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.410756 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410844 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410873 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.410937 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.411018 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:33.410995245 +0000 UTC m=+130.312319379 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.411056 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.411097 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.411119 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:17 crc kubenswrapper[5118]: E1208 17:44:17.411214 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:33.41119416 +0000 UTC m=+130.312518284 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.418731 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.426274 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.427286 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.427381 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.438525 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.466620 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.478100 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.498292 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.512569 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-serving-cert\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.512624 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.517709 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.538703 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.557364 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.578520 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.598421 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.618674 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.637859 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.657805 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.677244 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.698226 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.718065 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.737837 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.758570 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.810568 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28b33fd8-46b7-46e9-bef9-ec6b3f035300-kube-api-access\") pod \"kube-apiserver-operator-575994946d-bhk9x\" (UID: \"28b33fd8-46b7-46e9-bef9-ec6b3f035300\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.822585 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhvr7\" (UniqueName: \"kubernetes.io/projected/1a749ad3-837c-4804-b23c-2abb017b5b82-kube-api-access-lhvr7\") pod \"machine-api-operator-755bb95488-5httz\" (UID: \"1a749ad3-837c-4804-b23c-2abb017b5b82\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.838669 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wb2r\" (UniqueName: \"kubernetes.io/projected/837f85a8-fff5-46a0-b1d5-2d51271f415a-kube-api-access-8wb2r\") pod \"openshift-apiserver-operator-846cbfc458-q6lj7\" (UID: \"837f85a8-fff5-46a0-b1d5-2d51271f415a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.856254 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwkwh\" (UniqueName: \"kubernetes.io/projected/f5c1e280-e9c9-4a30-bb13-023852fd940b-kube-api-access-gwkwh\") pod \"multus-admission-controller-69db94689b-v9sxk\" (UID: \"f5c1e280-e9c9-4a30-bb13-023852fd940b\") " pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.875855 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6n9\" (UniqueName: \"kubernetes.io/projected/2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f-kube-api-access-mb6n9\") pod \"machine-config-operator-67c9d58cbb-4g75z\" (UID: \"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.884161 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.903102 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvsf\" (UniqueName: \"kubernetes.io/projected/3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c-kube-api-access-bwvsf\") pod \"apiserver-8596bd845d-rdv9c\" (UID: \"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.920506 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t9zh\" (UniqueName: \"kubernetes.io/projected/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-kube-api-access-2t9zh\") pod \"route-controller-manager-776cdc94d6-qkg2q\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.935586 5118 request.go:752] "Waited before sending request" delay="1.951693167s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.937688 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfwp\" (UniqueName: \"kubernetes.io/projected/695dd41c-159e-4e22-98e5-e27fdf4296fd-kube-api-access-nsfwp\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.948702 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.952735 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkd6h\" (UniqueName: \"kubernetes.io/projected/8dcd2702-e20f-439b-b2c7-27095126b87e-kube-api-access-lkd6h\") pod \"controller-manager-65b6cccf98-6wjgz\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.954610 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.966698 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.971310 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcp6m\" (UniqueName: \"kubernetes.io/projected/742843af-c521-4d4a-beea-e6feae8140e1-kube-api-access-tcp6m\") pod \"collect-profiles-29420250-qhrfp\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.977759 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.979605 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.993765 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.994783 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" Dec 08 17:44:17 crc kubenswrapper[5118]: I1208 17:44:17.997756 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.017403 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.060178 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.102722 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt495\" (UniqueName: \"kubernetes.io/projected/9a815eca-9800-4b68-adc1-5953173f4427-kube-api-access-rt495\") pod \"catalog-operator-75ff9f647d-bl822\" (UID: \"9a815eca-9800-4b68-adc1-5953173f4427\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.115530 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-txll4\" (UniqueName: \"kubernetes.io/projected/9148080a-77e2-4847-840a-d67f837c8fbe-kube-api-access-txll4\") pod \"package-server-manager-77f986bd66-d8qsj\" (UID: \"9148080a-77e2-4847-840a-d67f837c8fbe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.120421 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.133448 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1cd09f9c-6a6f-438a-a982-082edc35a55c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.156995 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a52d6e07-c08e-4424-8a3f-50052c311604-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-gvb6q\" (UID: \"a52d6e07-c08e-4424-8a3f-50052c311604\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.171234 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.171709 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp59p\" (UniqueName: \"kubernetes.io/projected/0b3a0959-d09e-4fd8-b931-d85bb42a3896-kube-api-access-vp59p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dhfht\" (UID: \"0b3a0959-d09e-4fd8-b931-d85bb42a3896\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.196860 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpbb7\" (UniqueName: \"kubernetes.io/projected/2554c491-6bfb-47fd-9b76-c1da12e702b1-kube-api-access-lpbb7\") pod \"service-ca-operator-5b9c976747-cdz4v\" (UID: \"2554c491-6bfb-47fd-9b76-c1da12e702b1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.217736 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2drtz\" (UniqueName: \"kubernetes.io/projected/39c08b26-3404-4ffd-a53a-c86f0c654db7-kube-api-access-2drtz\") pod \"downloads-747b44746d-x7wvx\" (UID: \"39c08b26-3404-4ffd-a53a-c86f0c654db7\") " pod="openshift-console/downloads-747b44746d-x7wvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.225284 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.234014 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fr6z\" (UniqueName: \"kubernetes.io/projected/fe85cb02-2d21-4fc3-92c1-6d060a006011-kube-api-access-5fr6z\") pod \"router-default-68cf44c8b8-rscz2\" (UID: \"fe85cb02-2d21-4fc3-92c1-6d060a006011\") " pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.264981 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-785pk\" (UniqueName: \"kubernetes.io/projected/085a3a20-9b8f-4448-a4cb-89465f57027c-kube-api-access-785pk\") pod \"packageserver-7d4fc7d867-4kjg6\" (UID: \"085a3a20-9b8f-4448-a4cb-89465f57027c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.272557 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.273964 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fsgh\" (UniqueName: \"kubernetes.io/projected/f22fa87e-79cb-498c-a2ab-166d47fd70a5-kube-api-access-9fsgh\") pod \"cluster-samples-operator-6b564684c8-2cnx5\" (UID: \"f22fa87e-79cb-498c-a2ab-166d47fd70a5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.297236 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mm8b\" (UniqueName: \"kubernetes.io/projected/9af82654-06bc-4376-bff5-d6adacce9785-kube-api-access-2mm8b\") pod \"marketplace-operator-547dbd544d-85wdh\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:18 crc kubenswrapper[5118]: W1208 17:44:18.302107 5118 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a9ac21c_f3fb_42c7_a5ce_096d015b8d3c.slice/crio-816097fb59ca861735960fa8c873f9eb211a033a3d03ed41d1527d9856c3c611 WatchSource:0}: Error finding container 816097fb59ca861735960fa8c873f9eb211a033a3d03ed41d1527d9856c3c611: Status 404 returned error can't find the container with id 816097fb59ca861735960fa8c873f9eb211a033a3d03ed41d1527d9856c3c611 Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.307100 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.318264 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp8xf\" (UniqueName: \"kubernetes.io/projected/82728066-0204-4d71-acff-8779194a3e3c-kube-api-access-pp8xf\") pod \"migrator-866fcbc849-5pp5q\" (UID: \"82728066-0204-4d71-acff-8779194a3e3c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.332313 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.334829 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7qlx\" (UniqueName: \"kubernetes.io/projected/1cd09f9c-6a6f-438a-a982-082edc35a55c-kube-api-access-m7qlx\") pod \"cluster-image-registry-operator-86c45576b9-rwgjl\" (UID: \"1cd09f9c-6a6f-438a-a982-082edc35a55c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.349071 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.349226 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.354923 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.357145 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.361973 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.362430 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/163e109f-c588-4057-a961-86bcca55948f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.362469 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwj8f\" (UniqueName: \"kubernetes.io/projected/c987ac4d-5129-45aa-afe4-ab42b6907462-kube-api-access-zwj8f\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363085 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-ca-trust-extracted\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363126 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7lm\" (UniqueName: \"kubernetes.io/projected/78316998-7ca1-4495-997b-bad16252fa84-kube-api-access-pw7lm\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363150 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c987ac4d-5129-45aa-afe4-ab42b6907462-srv-cert\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363174 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b6ea75-6b68-454a-855f-958a2bf6150b-config\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363198 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6be72eaf-a179-4e2b-a12d-4b5dbb213183-metrics-tls\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363251 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/163e109f-c588-4057-a961-86bcca55948f-config\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: 
\"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363270 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ada44265-dcab-408c-843e-e5c5a45aa138-signing-cabundle\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363315 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdlcd\" (UniqueName: \"kubernetes.io/projected/92b6ea75-6b68-454a-855f-958a2bf6150b-kube-api-access-mdlcd\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363364 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pflth\" (UniqueName: \"kubernetes.io/projected/6be72eaf-a179-4e2b-a12d-4b5dbb213183-kube-api-access-pflth\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363386 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-tls\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363427 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-installation-pull-secrets\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363457 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363473 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhskb\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-kube-api-access-dhskb\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.363498 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/163e109f-c588-4057-a961-86bcca55948f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: 
\"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364163 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-bound-sa-token\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364195 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjjgh\" (UniqueName: \"kubernetes.io/projected/ada44265-dcab-408c-843e-e5c5a45aa138-kube-api-access-wjjgh\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364213 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c987ac4d-5129-45aa-afe4-ab42b6907462-profile-collector-cert\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364230 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92b6ea75-6b68-454a-855f-958a2bf6150b-auth-proxy-config\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364260 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-certificates\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364275 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ada44265-dcab-408c-843e-e5c5a45aa138-signing-key\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364296 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6be72eaf-a179-4e2b-a12d-4b5dbb213183-tmp-dir\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364316 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/78316998-7ca1-4495-997b-bad16252fa84-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364352 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92b6ea75-6b68-454a-855f-958a2bf6150b-machine-approver-tls\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364373 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/163e109f-c588-4057-a961-86bcca55948f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364395 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-trusted-ca\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364410 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c987ac4d-5129-45aa-afe4-ab42b6907462-tmpfs\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.364435 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78316998-7ca1-4495-997b-bad16252fa84-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.365348 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:18.865331465 +0000 UTC m=+115.766655559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.377140 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.378940 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.391309 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e666ddb1-3625-4468-9d05-21215b5041c1-metrics-certs\") pod \"network-metrics-daemon-54w78\" (UID: \"e666ddb1-3625-4468-9d05-21215b5041c1\") " pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.394701 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-v9sxk"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.403056 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.407322 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695dd41c-159e-4e22-98e5-e27fdf4296fd-serving-cert\") pod \"apiserver-9ddfb9f55-8h8fl\" (UID: \"695dd41c-159e-4e22-98e5-e27fdf4296fd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.414256 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.415182 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6wjgz"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.425922 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.436825 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.436946 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.441499 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.442610 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-x7wvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.451643 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.458760 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.458771 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465286 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465432 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6be72eaf-a179-4e2b-a12d-4b5dbb213183-metrics-tls\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465464 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-dir\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465484 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465512 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4sr\" (UniqueName: \"kubernetes.io/projected/b81b63fd-c7d6-4446-ab93-c62912586002-kube-api-access-4r4sr\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465531 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdlcd\" (UniqueName: \"kubernetes.io/projected/92b6ea75-6b68-454a-855f-958a2bf6150b-kube-api-access-mdlcd\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465546 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465562 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: 
I1208 17:44:18.465582 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbad8204-9790-4f15-a74c-0149d19a4785-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465607 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c48eb41-252c-441b-9506-329d9f6b0371-serving-cert\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465638 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb8vr\" (UniqueName: \"kubernetes.io/projected/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-kube-api-access-jb8vr\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465683 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465701 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465727 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e8b3e0b-d963-4522-9a08-71aee0979479-serving-cert\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465744 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-installation-pull-secrets\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465762 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhskb\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-kube-api-access-dhskb\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " 
pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465777 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465807 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-ca\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465825 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6be72eaf-a179-4e2b-a12d-4b5dbb213183-tmp-dir\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465842 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmh9\" (UniqueName: \"kubernetes.io/projected/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-kube-api-access-7kmh9\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465857 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d549986a-81c9-4cd0-86b0-61e4b6700ddf-certs\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465897 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-trusted-ca-bundle\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465917 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wjjgh\" (UniqueName: \"kubernetes.io/projected/ada44265-dcab-408c-843e-e5c5a45aa138-kube-api-access-wjjgh\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465956 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ada44265-dcab-408c-843e-e5c5a45aa138-signing-key\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465975 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jqx7r\" (UniqueName: \"kubernetes.io/projected/dbad8204-9790-4f15-a74c-0149d19a4785-kube-api-access-jqx7r\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.465990 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466006 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1125cbf4-59e9-464e-8305-d2fc133ae675-config-volume\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466023 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-config\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466039 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466054 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-socket-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466069 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-csi-data-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466090 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/78316998-7ca1-4495-997b-bad16252fa84-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466114 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466160 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6j7\" (UniqueName: \"kubernetes.io/projected/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-kube-api-access-kp6j7\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466179 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-oauth-serving-cert\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466194 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466213 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78316998-7ca1-4495-997b-bad16252fa84-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466231 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0157c9d2-3779-46c8-9da9-1fffa52986a6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466260 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/163e109f-c588-4057-a961-86bcca55948f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466277 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zwj8f\" (UniqueName: \"kubernetes.io/projected/c987ac4d-5129-45aa-afe4-ab42b6907462-kube-api-access-zwj8f\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466293 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dq99\" (UniqueName: 
\"kubernetes.io/projected/4c48eb41-252c-441b-9506-329d9f6b0371-kube-api-access-5dq99\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466309 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466325 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6cbk\" (UniqueName: \"kubernetes.io/projected/d549986a-81c9-4cd0-86b0-61e4b6700ddf-kube-api-access-t6cbk\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466366 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-ca-trust-extracted\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.466738 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-ca-trust-extracted\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.467270 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:18.967243266 +0000 UTC m=+115.868567380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.470627 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ada44265-dcab-408c-843e-e5c5a45aa138-signing-key\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471179 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78316998-7ca1-4495-997b-bad16252fa84-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471294 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c987ac4d-5129-45aa-afe4-ab42b6907462-srv-cert\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471303 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6be72eaf-a179-4e2b-a12d-4b5dbb213183-tmp-dir\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471629 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6be72eaf-a179-4e2b-a12d-4b5dbb213183-metrics-tls\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471682 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7lm\" (UniqueName: \"kubernetes.io/projected/78316998-7ca1-4495-997b-bad16252fa84-kube-api-access-pw7lm\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471742 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-serving-cert\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471783 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-service-ca\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471818 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/163e109f-c588-4057-a961-86bcca55948f-config\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471837 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ada44265-dcab-408c-843e-e5c5a45aa138-signing-cabundle\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471858 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nng\" (UniqueName: \"kubernetes.io/projected/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-kube-api-access-b6nng\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.471981 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-available-featuregates\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472004 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-serving-cert\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472019 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0157c9d2-3779-46c8-9da9-1fffa52986a6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472069 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-plugins-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472099 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pflth\" (UniqueName: \"kubernetes.io/projected/6be72eaf-a179-4e2b-a12d-4b5dbb213183-kube-api-access-pflth\") pod \"dns-operator-799b87ffcd-9b988\" 
(UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472113 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-client\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472129 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-tls\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472161 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-config\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472194 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-registration-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472209 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-tmp-dir\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472239 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-mountpoint-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472260 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/163e109f-c588-4057-a961-86bcca55948f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472276 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbad8204-9790-4f15-a74c-0149d19a4785-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc 
kubenswrapper[5118]: I1208 17:44:18.472293 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472323 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-certificates\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472340 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8b3e0b-d963-4522-9a08-71aee0979479-config\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472383 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-bound-sa-token\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472399 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92b6ea75-6b68-454a-855f-958a2bf6150b-auth-proxy-config\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472417 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-policies\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472467 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92b6ea75-6b68-454a-855f-958a2bf6150b-machine-approver-tls\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472497 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c32d3580-29a1-4299-8926-e4c9caa4ff86-cert\") pod \"ingress-canary-psjrr\" (UID: \"c32d3580-29a1-4299-8926-e4c9caa4ff86\") " pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.472514 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7sq\" 
(UniqueName: \"kubernetes.io/projected/c32d3580-29a1-4299-8926-e4c9caa4ff86-kube-api-access-7n7sq\") pod \"ingress-canary-psjrr\" (UID: \"c32d3580-29a1-4299-8926-e4c9caa4ff86\") " pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.473197 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/163e109f-c588-4057-a961-86bcca55948f-config\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.473527 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-installation-pull-secrets\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.473995 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92b6ea75-6b68-454a-855f-958a2bf6150b-auth-proxy-config\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474294 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ada44265-dcab-408c-843e-e5c5a45aa138-signing-cabundle\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474459 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c987ac4d-5129-45aa-afe4-ab42b6907462-profile-collector-cert\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474529 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474563 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1125cbf4-59e9-464e-8305-d2fc133ae675-metrics-tls\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474594 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9hq\" (UniqueName: \"kubernetes.io/projected/1125cbf4-59e9-464e-8305-d2fc133ae675-kube-api-access-5n9hq\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " 
pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474717 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-config\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.474790 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d549986a-81c9-4cd0-86b0-61e4b6700ddf-node-bootstrap-token\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475109 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-certificates\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475120 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e8b3e0b-d963-4522-9a08-71aee0979479-trusted-ca\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475192 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475248 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-config\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475303 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/163e109f-c588-4057-a961-86bcca55948f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 
08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475357 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-trusted-ca\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475404 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c987ac4d-5129-45aa-afe4-ab42b6907462-tmpfs\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475409 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/163e109f-c588-4057-a961-86bcca55948f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475914 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkl6n\" (UniqueName: \"kubernetes.io/projected/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-kube-api-access-jkl6n\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.475984 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476011 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1125cbf4-59e9-464e-8305-d2fc133ae675-tmp-dir\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476053 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476082 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdt96\" (UniqueName: \"kubernetes.io/projected/0157c9d2-3779-46c8-9da9-1fffa52986a6-kube-api-access-cdt96\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476101 5118 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c987ac4d-5129-45aa-afe4-ab42b6907462-tmpfs\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476406 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476487 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0157c9d2-3779-46c8-9da9-1fffa52986a6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476560 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5bz\" (UniqueName: \"kubernetes.io/projected/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-kube-api-access-bc5bz\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476585 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscd7\" (UniqueName: \"kubernetes.io/projected/2e8b3e0b-d963-4522-9a08-71aee0979479-kube-api-access-nscd7\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476733 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-oauth-config\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476772 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b6ea75-6b68-454a-855f-958a2bf6150b-config\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476812 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-ready\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.476838 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-serving-cert\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.477141 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-trusted-ca\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.477792 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b6ea75-6b68-454a-855f-958a2bf6150b-config\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.478457 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c987ac4d-5129-45aa-afe4-ab42b6907462-profile-collector-cert\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.479267 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/163e109f-c588-4057-a961-86bcca55948f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.479383 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c987ac4d-5129-45aa-afe4-ab42b6907462-srv-cert\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.479832 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/78316998-7ca1-4495-997b-bad16252fa84-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.480477 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92b6ea75-6b68-454a-855f-958a2bf6150b-machine-approver-tls\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.481164 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.486145 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7"] 
Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.488111 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-tls\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.492927 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.505076 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5httz"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.532237 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.533577 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjjgh\" (UniqueName: \"kubernetes.io/projected/ada44265-dcab-408c-843e-e5c5a45aa138-kube-api-access-wjjgh\") pod \"service-ca-74545575db-d69qv\" (UID: \"ada44265-dcab-408c-843e-e5c5a45aa138\") " pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.544537 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-85wdh"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.547393 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-54w78" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.548202 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" event={"ID":"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c","Type":"ContainerStarted","Data":"816097fb59ca861735960fa8c873f9eb211a033a3d03ed41d1527d9856c3c611"} Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.553508 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" event={"ID":"fe85cb02-2d21-4fc3-92c1-6d060a006011","Type":"ContainerStarted","Data":"5d6e7de5f45a51861e6d1bfdd0ea8e46adcfa20d1b15dc4ad806e354d6b1d44d"} Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.566562 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdlcd\" (UniqueName: \"kubernetes.io/projected/92b6ea75-6b68-454a-855f-958a2bf6150b-kube-api-access-mdlcd\") pod \"machine-approver-54c688565-487qx\" (UID: \"92b6ea75-6b68-454a-855f-958a2bf6150b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.568146 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.578731 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: 
I1208 17:44:18.578771 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.578794 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e8b3e0b-d963-4522-9a08-71aee0979479-serving-cert\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.578985 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579041 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579076 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-ca\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579097 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmh9\" (UniqueName: \"kubernetes.io/projected/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-kube-api-access-7kmh9\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579114 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d549986a-81c9-4cd0-86b0-61e4b6700ddf-certs\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579147 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-trusted-ca-bundle\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579195 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqx7r\" (UniqueName: 
\"kubernetes.io/projected/dbad8204-9790-4f15-a74c-0149d19a4785-kube-api-access-jqx7r\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579214 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579237 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1125cbf4-59e9-464e-8305-d2fc133ae675-config-volume\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579254 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-config\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579268 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579285 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-socket-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579301 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-csi-data-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579319 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579351 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6j7\" (UniqueName: \"kubernetes.io/projected/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-kube-api-access-kp6j7\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " 
pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579372 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-oauth-serving-cert\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579390 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579414 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0157c9d2-3779-46c8-9da9-1fffa52986a6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579438 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5dq99\" (UniqueName: \"kubernetes.io/projected/4c48eb41-252c-441b-9506-329d9f6b0371-kube-api-access-5dq99\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579456 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579473 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6cbk\" (UniqueName: \"kubernetes.io/projected/d549986a-81c9-4cd0-86b0-61e4b6700ddf-kube-api-access-t6cbk\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579513 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-serving-cert\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579529 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-service-ca\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579556 5118 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-b6nng\" (UniqueName: \"kubernetes.io/projected/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-kube-api-access-b6nng\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579580 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-available-featuregates\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579603 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-serving-cert\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579620 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0157c9d2-3779-46c8-9da9-1fffa52986a6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579640 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-plugins-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579659 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-client\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579684 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-config\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579703 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-registration-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579719 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-tmp-dir\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 
17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579738 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-mountpoint-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579756 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbad8204-9790-4f15-a74c-0149d19a4785-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579772 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579772 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579793 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8b3e0b-d963-4522-9a08-71aee0979479-config\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579858 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-policies\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579935 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c32d3580-29a1-4299-8926-e4c9caa4ff86-cert\") pod \"ingress-canary-psjrr\" (UID: \"c32d3580-29a1-4299-8926-e4c9caa4ff86\") " pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579954 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7sq\" (UniqueName: \"kubernetes.io/projected/c32d3580-29a1-4299-8926-e4c9caa4ff86-kube-api-access-7n7sq\") pod \"ingress-canary-psjrr\" (UID: \"c32d3580-29a1-4299-8926-e4c9caa4ff86\") " pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579977 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.579995 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1125cbf4-59e9-464e-8305-d2fc133ae675-metrics-tls\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580011 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9hq\" (UniqueName: \"kubernetes.io/projected/1125cbf4-59e9-464e-8305-d2fc133ae675-kube-api-access-5n9hq\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580034 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-config\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580057 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580076 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d549986a-81c9-4cd0-86b0-61e4b6700ddf-node-bootstrap-token\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580099 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e8b3e0b-d963-4522-9a08-71aee0979479-trusted-ca\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580118 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580139 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-config\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc 
kubenswrapper[5118]: I1208 17:44:18.580172 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkl6n\" (UniqueName: \"kubernetes.io/projected/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-kube-api-access-jkl6n\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580190 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580207 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1125cbf4-59e9-464e-8305-d2fc133ae675-tmp-dir\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580227 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580245 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdt96\" (UniqueName: \"kubernetes.io/projected/0157c9d2-3779-46c8-9da9-1fffa52986a6-kube-api-access-cdt96\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580263 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-trusted-ca-bundle\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580272 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580339 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0157c9d2-3779-46c8-9da9-1fffa52986a6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580368 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bc5bz\" (UniqueName: 
\"kubernetes.io/projected/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-kube-api-access-bc5bz\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580390 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nscd7\" (UniqueName: \"kubernetes.io/projected/2e8b3e0b-d963-4522-9a08-71aee0979479-kube-api-access-nscd7\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580407 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-oauth-config\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580428 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-ready\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580453 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-serving-cert\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580480 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-dir\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580497 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580526 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4r4sr\" (UniqueName: \"kubernetes.io/projected/b81b63fd-c7d6-4446-ab93-c62912586002-kube-api-access-4r4sr\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580528 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8b3e0b-d963-4522-9a08-71aee0979479-config\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " 
pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580564 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580593 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580610 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbad8204-9790-4f15-a74c-0149d19a4785-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580640 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c48eb41-252c-441b-9506-329d9f6b0371-serving-cert\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.580662 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jb8vr\" (UniqueName: \"kubernetes.io/projected/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-kube-api-access-jb8vr\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.582414 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhskb\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-kube-api-access-dhskb\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.582518 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.582588 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.582799 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e8b3e0b-d963-4522-9a08-71aee0979479-trusted-ca\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.582890 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-available-featuregates\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.583068 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-policies\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.583329 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1125cbf4-59e9-464e-8305-d2fc133ae675-config-volume\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.583409 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.083396964 +0000 UTC m=+115.984721058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.583656 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-config\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.583980 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.583996 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-config\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.584431 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1125cbf4-59e9-464e-8305-d2fc133ae675-tmp-dir\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.584857 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.584957 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-csi-data-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.585332 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.585554 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-socket-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.585851 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-ca\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.588055 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e8b3e0b-d963-4522-9a08-71aee0979479-serving-cert\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.588233 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.588371 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-mountpoint-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.589099 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d549986a-81c9-4cd0-86b0-61e4b6700ddf-certs\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.589169 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c48eb41-252c-441b-9506-329d9f6b0371-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.589280 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.590371 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0157c9d2-3779-46c8-9da9-1fffa52986a6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.590426 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-plugins-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.590576 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1125cbf4-59e9-464e-8305-d2fc133ae675-metrics-tls\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.591644 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-config\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.591939 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.592070 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0157c9d2-3779-46c8-9da9-1fffa52986a6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.592133 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.592930 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.593300 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbad8204-9790-4f15-a74c-0149d19a4785-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.593998 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbad8204-9790-4f15-a74c-0149d19a4785-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.594929 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 
crc kubenswrapper[5118]: I1208 17:44:18.595294 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b81b63fd-c7d6-4446-ab93-c62912586002-registration-dir\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.596105 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-config\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.596461 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.596528 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.596813 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-client\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.596814 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-oauth-config\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.596964 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-serving-cert\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.597312 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-oauth-serving-cert\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.597313 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-tmp-dir\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.597368 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-ready\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.597383 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-dir\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.597911 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d549986a-81c9-4cd0-86b0-61e4b6700ddf-node-bootstrap-token\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.598138 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-service-ca\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.598651 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-console-serving-cert\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.598709 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.599077 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.600389 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-serving-cert\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.600581 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-serving-cert\") pod 
\"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.600675 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/163e109f-c588-4057-a961-86bcca55948f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6lgwk\" (UID: \"163e109f-c588-4057-a961-86bcca55948f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.605047 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.616313 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwj8f\" (UniqueName: \"kubernetes.io/projected/c987ac4d-5129-45aa-afe4-ab42b6907462-kube-api-access-zwj8f\") pod \"olm-operator-5cdf44d969-ggh59\" (UID: \"c987ac4d-5129-45aa-afe4-ab42b6907462\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.630537 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7lm\" (UniqueName: \"kubernetes.io/projected/78316998-7ca1-4495-997b-bad16252fa84-kube-api-access-pw7lm\") pod \"machine-config-controller-f9cdd68f7-p88k2\" (UID: \"78316998-7ca1-4495-997b-bad16252fa84\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.638726 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.644235 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c48eb41-252c-441b-9506-329d9f6b0371-serving-cert\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.657203 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c32d3580-29a1-4299-8926-e4c9caa4ff86-cert\") pod \"ingress-canary-psjrr\" (UID: \"c32d3580-29a1-4299-8926-e4c9caa4ff86\") " pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.658350 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.660821 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-bound-sa-token\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.669523 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.681166 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.681494 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.181472049 +0000 UTC m=+116.082796143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.683730 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-d69qv" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.684150 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pflth\" (UniqueName: \"kubernetes.io/projected/6be72eaf-a179-4e2b-a12d-4b5dbb213183-kube-api-access-pflth\") pod \"dns-operator-799b87ffcd-9b988\" (UID: \"6be72eaf-a179-4e2b-a12d-4b5dbb213183\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: W1208 17:44:18.689007 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3a0959_d09e_4fd8_b931_d85bb42a3896.slice/crio-7e94bf2ba52806324d32672c1909b65af02786b0323b6863e98b0796a6cc858a WatchSource:0}: Error finding container 7e94bf2ba52806324d32672c1909b65af02786b0323b6863e98b0796a6cc858a: Status 404 returned error can't find the container with id 7e94bf2ba52806324d32672c1909b65af02786b0323b6863e98b0796a6cc858a Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.702193 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.714797 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r4sr\" (UniqueName: \"kubernetes.io/projected/b81b63fd-c7d6-4446-ab93-c62912586002-kube-api-access-4r4sr\") pod \"csi-hostpathplugin-qrls7\" (UID: \"b81b63fd-c7d6-4446-ab93-c62912586002\") " pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.716704 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.728245 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:18 crc kubenswrapper[5118]: W1208 17:44:18.733003 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a815eca_9800_4b68_adc1_5953173f4427.slice/crio-2b5aafb4cf2fe32e9e7cc2fbf243cade2e2c4a4f237dea8c6de46a683a65283d WatchSource:0}: Error finding container 2b5aafb4cf2fe32e9e7cc2fbf243cade2e2c4a4f237dea8c6de46a683a65283d: Status 404 returned error can't find the container with id 2b5aafb4cf2fe32e9e7cc2fbf243cade2e2c4a4f237dea8c6de46a683a65283d Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.734046 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.751188 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb8vr\" (UniqueName: \"kubernetes.io/projected/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-kube-api-access-jb8vr\") pod \"oauth-openshift-66458b6674-ztdrc\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.763905 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkl6n\" (UniqueName: \"kubernetes.io/projected/1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f-kube-api-access-jkl6n\") pod \"etcd-operator-69b85846b6-k26tc\" (UID: \"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.768159 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.782768 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.783136 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.283123021 +0000 UTC m=+116.184447115 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.783779 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqx7r\" (UniqueName: \"kubernetes.io/projected/dbad8204-9790-4f15-a74c-0149d19a4785-kube-api-access-jqx7r\") pod \"kube-storage-version-migrator-operator-565b79b866-6gkgz\" (UID: \"dbad8204-9790-4f15-a74c-0149d19a4785\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.803976 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6nng\" (UniqueName: \"kubernetes.io/projected/ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b-kube-api-access-b6nng\") pod \"openshift-config-operator-5777786469-v69x6\" (UID: \"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.817582 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc5bz\" (UniqueName: \"kubernetes.io/projected/0f90a7a2-721d-4929-a4fa-fd1d2019b4cd-kube-api-access-bc5bz\") pod \"openshift-controller-manager-operator-686468bdd5-m5ltz\" (UID: \"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.834607 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0157c9d2-3779-46c8-9da9-1fffa52986a6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.856243 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dq99\" (UniqueName: \"kubernetes.io/projected/4c48eb41-252c-441b-9506-329d9f6b0371-kube-api-access-5dq99\") pod \"authentication-operator-7f5c659b84-5scww\" (UID: \"4c48eb41-252c-441b-9506-329d9f6b0371\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.861993 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.878941 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.879158 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.884494 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.884774 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.384731624 +0000 UTC m=+116.286055718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.885017 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.885326 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.385304419 +0000 UTC m=+116.286628513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.888565 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nscd7\" (UniqueName: \"kubernetes.io/projected/2e8b3e0b-d963-4522-9a08-71aee0979479-kube-api-access-nscd7\") pod \"console-operator-67c89758df-79mps\" (UID: \"2e8b3e0b-d963-4522-9a08-71aee0979479\") " pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.892207 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.894268 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmh9\" (UniqueName: \"kubernetes.io/projected/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-kube-api-access-7kmh9\") pod \"cni-sysctl-allowlist-ds-bdhnb\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.899086 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.906051 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.911663 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.915573 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7sq\" (UniqueName: \"kubernetes.io/projected/c32d3580-29a1-4299-8926-e4c9caa4ff86-kube-api-access-7n7sq\") pod \"ingress-canary-psjrr\" (UID: \"c32d3580-29a1-4299-8926-e4c9caa4ff86\") " pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.923211 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.926462 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.935039 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6cbk\" (UniqueName: \"kubernetes.io/projected/d549986a-81c9-4cd0-86b0-61e4b6700ddf-kube-api-access-t6cbk\") pod \"machine-config-server-psb45\" (UID: \"d549986a-81c9-4cd0-86b0-61e4b6700ddf\") " pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.942636 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.948302 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-psjrr" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.954189 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdt96\" (UniqueName: \"kubernetes.io/projected/0157c9d2-3779-46c8-9da9-1fffa52986a6-kube-api-access-cdt96\") pod \"ingress-operator-6b9cb4dbcf-2pwhz\" (UID: \"0157c9d2-3779-46c8-9da9-1fffa52986a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.989422 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.989572 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.489544112 +0000 UTC m=+116.390868206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.989849 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:18 crc kubenswrapper[5118]: E1208 17:44:18.990495 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.490487758 +0000 UTC m=+116.391811852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.992143 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk"] Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.994082 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9hq\" (UniqueName: \"kubernetes.io/projected/1125cbf4-59e9-464e-8305-d2fc133ae675-kube-api-access-5n9hq\") pod \"dns-default-c5tbq\" (UID: \"1125cbf4-59e9-464e-8305-d2fc133ae675\") " pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:18 crc kubenswrapper[5118]: I1208 17:44:18.995698 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6j7\" (UniqueName: \"kubernetes.io/projected/a272b1fd-864b-4107-a4fd-6f6ab82a1d34-kube-api-access-kp6j7\") pod \"console-64d44f6ddf-dhfvx\" (UID: \"a272b1fd-864b-4107-a4fd-6f6ab82a1d34\") " pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.020418 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-x7wvx"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.067772 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8h8fl"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.091335 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.091545 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.591518333 +0000 UTC m=+116.492842417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.092030 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.092366 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.592359217 +0000 UTC m=+116.493683311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.142787 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.154136 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.171097 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.173837 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.184924 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.193621 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.193964 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.693925867 +0000 UTC m=+116.595249961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.194450 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.195040 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.695030387 +0000 UTC m=+116.596354481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.217641 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-psb45" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.295126 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-54w78"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.295967 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.296453 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.796432824 +0000 UTC m=+116.697756918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.297084 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.297362 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.797355669 +0000 UTC m=+116.698679753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.347586 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.400408 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.401063 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:19.901036247 +0000 UTC m=+116.802360341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: W1208 17:44:19.402711 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda52d6e07_c08e_4424_8a3f_50052c311604.slice/crio-4ed40421c9121dda3498bb864df7f9530153e9b4f72cb4cbfe60b409b4540405 WatchSource:0}: Error finding container 4ed40421c9121dda3498bb864df7f9530153e9b4f72cb4cbfe60b409b4540405: Status 404 returned error can't find the container with id 4ed40421c9121dda3498bb864df7f9530153e9b4f72cb4cbfe60b409b4540405 Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.447637 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.502016 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.502359 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.00234251 +0000 UTC m=+116.903666604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.560985 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-9b988"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.572469 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" event={"ID":"92b6ea75-6b68-454a-855f-958a2bf6150b","Type":"ContainerStarted","Data":"36311e0cc1f78fdc8ffedb0e1193555d9b824bd4e443818a4256036117680d11"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.586242 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-d69qv"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.593094 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" event={"ID":"a52d6e07-c08e-4424-8a3f-50052c311604","Type":"ContainerStarted","Data":"4ed40421c9121dda3498bb864df7f9530153e9b4f72cb4cbfe60b409b4540405"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.603025 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.603189 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.10316134 +0000 UTC m=+117.004485434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.603551 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.603983 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:20.103976222 +0000 UTC m=+117.005300316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.613251 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.619679 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" event={"ID":"f5c1e280-e9c9-4a30-bb13-023852fd940b","Type":"ContainerStarted","Data":"c9e7e4c5beb239d44fda84d69d0bf1de0551d5bbed4fbadbcd54641355317b4f"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.622298 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" event={"ID":"695dd41c-159e-4e22-98e5-e27fdf4296fd","Type":"ContainerStarted","Data":"55ee1749961e18b627e9b4aefcb2a91f1db0f87d70ada2e87c92d1f25343c71e"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.653233 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" event={"ID":"163e109f-c588-4057-a961-86bcca55948f","Type":"ContainerStarted","Data":"851c6622d148b06bc6416c3a5117faa5c7f1cd7ee3259489350f9b5c41877051"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.700954 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" event={"ID":"2554c491-6bfb-47fd-9b76-c1da12e702b1","Type":"ContainerStarted","Data":"38d50ce5086da05afa7a3898ce13b67b33051171c36a2939e44da253e41ced09"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.704591 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.704962 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.204944796 +0000 UTC m=+117.106268890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.709908 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" event={"ID":"9a815eca-9800-4b68-adc1-5953173f4427","Type":"ContainerStarted","Data":"2b5aafb4cf2fe32e9e7cc2fbf243cade2e2c4a4f237dea8c6de46a683a65283d"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.728302 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" event={"ID":"9148080a-77e2-4847-840a-d67f837c8fbe","Type":"ContainerStarted","Data":"cc277b698df2022738d4f387f965376a0334471715e7cc976b378deefc3d5268"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.728346 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" event={"ID":"9148080a-77e2-4847-840a-d67f837c8fbe","Type":"ContainerStarted","Data":"040d0a9b0bab7159bf6dba4662d179754d736d11236a3e461026f5afe21acae1"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.780786 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" event={"ID":"1a749ad3-837c-4804-b23c-2abb017b5b82","Type":"ContainerStarted","Data":"2c5378bcb13a4698733a97514dd6db316e01001973c5284a98ae434e1e62677c"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.780829 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" event={"ID":"1a749ad3-837c-4804-b23c-2abb017b5b82","Type":"ContainerStarted","Data":"6a4e4c5b074f1d16ab88e055d0f9e1cfa752bd041cd8492fbc0bc4919735264a"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.782070 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" event={"ID":"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6","Type":"ContainerStarted","Data":"c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.782088 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" event={"ID":"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6","Type":"ContainerStarted","Data":"68c350dfaec5080e8a88faabfaf27154a6c5538a37e7bd8bd70c0353c8cdd2ad"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.782864 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.792809 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" event={"ID":"0b3a0959-d09e-4fd8-b931-d85bb42a3896","Type":"ContainerStarted","Data":"87eea6756b10355c4f73bba8b36bf7ead92d1d9acfc85e86ebb9a7df64d4def2"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.792845 5118 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" event={"ID":"0b3a0959-d09e-4fd8-b931-d85bb42a3896","Type":"ContainerStarted","Data":"7e94bf2ba52806324d32672c1909b65af02786b0323b6863e98b0796a6cc858a"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.800208 5118 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-qkg2q container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.800284 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" podUID="32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.807042 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.807372 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.3073592 +0000 UTC m=+117.208683284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.809453 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" event={"ID":"fe85cb02-2d21-4fc3-92c1-6d060a006011","Type":"ContainerStarted","Data":"aaeb3ad835809d04f88f513dc00b05102254350f0c7aef6304aa4091a6f97eba"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.815301 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-x7wvx" event={"ID":"39c08b26-3404-4ffd-a53a-c86f0c654db7","Type":"ContainerStarted","Data":"a29f0138192d5f7b697a45fae43a401e19a3d8c209024ecaae04de9e284797e6"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.842507 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-5scww"] Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.846708 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" event={"ID":"82728066-0204-4d71-acff-8779194a3e3c","Type":"ContainerStarted","Data":"f627e015c95ea1ee0513c36a0287fcb4c9d5de035dc2e3452f489e480b701515"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.852385 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" event={"ID":"742843af-c521-4d4a-beea-e6feae8140e1","Type":"ContainerStarted","Data":"767de89320d755dfb8a7c8272189759333c3065a02a02dd9f10f50995cf940ca"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.852416 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" event={"ID":"742843af-c521-4d4a-beea-e6feae8140e1","Type":"ContainerStarted","Data":"e7e4e5294ae9ba605e56ade0a4247be53479964b4053089fd141b6910e3a9015"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.863764 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" event={"ID":"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f","Type":"ContainerStarted","Data":"8696daabd20626e438660782c406e3a96f0d443dea9a1c48c48ea5acc14fbcb3"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.863823 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" event={"ID":"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f","Type":"ContainerStarted","Data":"134079b4d4f851ef7758ad94f6c8e53b5aac3957f7ae79005f6514385384f7ed"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.895553 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" event={"ID":"28b33fd8-46b7-46e9-bef9-ec6b3f035300","Type":"ContainerStarted","Data":"f7c8bc8f138fcd817df20a77ba3ff651ff2c4a74167a54e4579928c25ec0c21b"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.895590 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" event={"ID":"28b33fd8-46b7-46e9-bef9-ec6b3f035300","Type":"ContainerStarted","Data":"c4b331e3708f747d5c861febcad16cb5896d1f3f0a948c754be2e18821ce0619"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.913266 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:19 crc kubenswrapper[5118]: E1208 17:44:19.914672 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.414655216 +0000 UTC m=+117.315979310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.935506 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" event={"ID":"837f85a8-fff5-46a0-b1d5-2d51271f415a","Type":"ContainerStarted","Data":"d97d4f279c86a04cc1fb5ff196e7d1ffec931124c9c79c1179dd5c6cf46e79fe"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.935549 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" event={"ID":"837f85a8-fff5-46a0-b1d5-2d51271f415a","Type":"ContainerStarted","Data":"d2ebdfc8441e7878e8a40568330da7cc9a409e78be428ef0238fe30db4f65e25"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.949172 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-54w78" event={"ID":"e666ddb1-3625-4468-9d05-21215b5041c1","Type":"ContainerStarted","Data":"cc8ef7b3262f688f379b890c5c136fad520d130a37d57f8baf21bc1628f38f4f"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.953138 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" event={"ID":"085a3a20-9b8f-4448-a4cb-89465f57027c","Type":"ContainerStarted","Data":"6897b984d61654e6d3a250f3c2bbf767dffe9819bb35e5b7cbcf8f25cc9d44bf"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.954324 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c" containerID="691e108021a308b38978654b94504ea1b0ad99573141f2093ba4d98efaa1e223" exitCode=0 Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.955959 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" event={"ID":"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c","Type":"ContainerDied","Data":"691e108021a308b38978654b94504ea1b0ad99573141f2093ba4d98efaa1e223"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.996109 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" event={"ID":"9af82654-06bc-4376-bff5-d6adacce9785","Type":"ContainerStarted","Data":"3635ccac4190e9ac4d7e71077ab9092bae6db0a6613f789211d0b6f919a4a49e"} Dec 08 17:44:19 crc kubenswrapper[5118]: I1208 17:44:19.998863 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podStartSLOduration=96.998849183 podStartE2EDuration="1m36.998849183s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:19.997692402 +0000 UTC m=+116.899016496" watchObservedRunningTime="2025-12-08 17:44:19.998849183 +0000 UTC m=+116.900173277" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.006593 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" event={"ID":"8dcd2702-e20f-439b-b2c7-27095126b87e","Type":"ContainerStarted","Data":"3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb"} Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.006670 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" event={"ID":"8dcd2702-e20f-439b-b2c7-27095126b87e","Type":"ContainerStarted","Data":"a52870906a18720d6272a3d6961d0db095af769bae361b4b65db5b6303cb885d"} Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.008173 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.015306 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.016523 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.516509775 +0000 UTC m=+117.417833869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.116348 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.116546 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.616502562 +0000 UTC m=+117.517826656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.117187 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.119303 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.619277178 +0000 UTC m=+117.520601262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.220936 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.221499 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.721476536 +0000 UTC m=+117.622800630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.222054 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.222665 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.722653048 +0000 UTC m=+117.623977142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.274607 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.317866 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" podStartSLOduration=97.317850114 podStartE2EDuration="1m37.317850114s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:20.315528041 +0000 UTC m=+117.216852135" watchObservedRunningTime="2025-12-08 17:44:20.317850114 +0000 UTC m=+117.219174198" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.325572 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.325940 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.825923445 +0000 UTC m=+117.727247539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.359742 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.364595 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz"] Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.374123 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:20 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:20 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:20 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.374186 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.433821 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.434241 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:20.934225509 +0000 UTC m=+117.835549603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.511331 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dhfht" podStartSLOduration=97.511318981 podStartE2EDuration="1m37.511318981s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:20.509235715 +0000 UTC m=+117.410559809" watchObservedRunningTime="2025-12-08 17:44:20.511318981 +0000 UTC m=+117.412643075" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.545077 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.545581 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.045558026 +0000 UTC m=+117.946882120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.581261 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-dhfvx"] Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.582901 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qrls7"] Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.594046 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v69x6"] Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.596005 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-psjrr"] Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.648384 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.648753 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.14873866 +0000 UTC m=+118.050062754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.669342 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" podStartSLOduration=97.669321712 podStartE2EDuration="1m37.669321712s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:20.668112018 +0000 UTC m=+117.569436132" watchObservedRunningTime="2025-12-08 17:44:20.669321712 +0000 UTC m=+117.570645806" Dec 08 17:44:20 crc kubenswrapper[5118]: W1208 17:44:20.714068 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceb6ea27_6be6_4eb2_8f56_d8ddfa3f0b0b.slice/crio-7e30c5d9c26bffaf27132475d044e517d0468cb10a56a5b9876e62c41fd6908b WatchSource:0}: Error finding container 7e30c5d9c26bffaf27132475d044e517d0468cb10a56a5b9876e62c41fd6908b: Status 404 returned error can't find the container with id 7e30c5d9c26bffaf27132475d044e517d0468cb10a56a5b9876e62c41fd6908b Dec 08 17:44:20 crc kubenswrapper[5118]: W1208 17:44:20.714707 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbad8204_9790_4f15_a74c_0149d19a4785.slice/crio-6278e7a7e9899b2ad5ce35a00b137621f1dfd9cc576a596b193fa1aabb7e545e WatchSource:0}: Error finding container 6278e7a7e9899b2ad5ce35a00b137621f1dfd9cc576a596b193fa1aabb7e545e: Status 404 returned error can't find the container with id 6278e7a7e9899b2ad5ce35a00b137621f1dfd9cc576a596b193fa1aabb7e545e Dec 08 17:44:20 crc kubenswrapper[5118]: W1208 17:44:20.723131 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc32d3580_29a1_4299_8926_e4c9caa4ff86.slice/crio-2449e51c8bd92a5ae9266cc65003e01d779611a71a071521302420a8b6a964d5 WatchSource:0}: Error finding container 2449e51c8bd92a5ae9266cc65003e01d779611a71a071521302420a8b6a964d5: Status 404 returned error can't find the container with id 2449e51c8bd92a5ae9266cc65003e01d779611a71a071521302420a8b6a964d5 Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.749433 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.749655 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" podStartSLOduration=97.749641102 podStartE2EDuration="1m37.749641102s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 
17:44:20.748803729 +0000 UTC m=+117.650127833" watchObservedRunningTime="2025-12-08 17:44:20.749641102 +0000 UTC m=+117.650965196" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.749995 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.249974691 +0000 UTC m=+118.151298795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: W1208 17:44:20.780594 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81b63fd_c7d6_4446_ab93_c62912586002.slice/crio-119b829edb31a9542c4e4278438498a6d436d9ffaf05166df40ae801a6ad750b WatchSource:0}: Error finding container 119b829edb31a9542c4e4278438498a6d436d9ffaf05166df40ae801a6ad750b: Status 404 returned error can't find the container with id 119b829edb31a9542c4e4278438498a6d436d9ffaf05166df40ae801a6ad750b Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.854197 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-q6lj7" podStartSLOduration=97.854181264 podStartE2EDuration="1m37.854181264s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:20.797114437 +0000 UTC m=+117.698438541" watchObservedRunningTime="2025-12-08 17:44:20.854181264 +0000 UTC m=+117.755505358" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.855923 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.856752 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.356732554 +0000 UTC m=+118.258056648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.905376 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-bhk9x" podStartSLOduration=97.90535912 podStartE2EDuration="1m37.90535912s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:20.903688874 +0000 UTC m=+117.805012968" watchObservedRunningTime="2025-12-08 17:44:20.90535912 +0000 UTC m=+117.806683214" Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.907735 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-k26tc"] Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.961578 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.961944 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.461896302 +0000 UTC m=+118.363220396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.962507 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:20 crc kubenswrapper[5118]: E1208 17:44:20.963177 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.463166877 +0000 UTC m=+118.364490971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:20 crc kubenswrapper[5118]: I1208 17:44:20.979396 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ztdrc"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.071149 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.081893 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.089435 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.090174 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.590155491 +0000 UTC m=+118.491479585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.093560 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.105458 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-79mps"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.112294 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" event={"ID":"4c48eb41-252c-441b-9506-329d9f6b0371","Type":"ContainerStarted","Data":"9accb4d0366a7a6ee7e967e14b871a878f5bb1961d14d60f8a5f3d145e7ccfef"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.163580 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" event={"ID":"085a3a20-9b8f-4448-a4cb-89465f57027c","Type":"ContainerStarted","Data":"a1ec18ba2c33a8e2721cfc00aa598356f52995bff92586962c046a782a1034c6"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.166670 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.180824 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" event={"ID":"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49","Type":"ContainerStarted","Data":"1171a1787b5d4f4328c171567ca11f6fbfef3b0d18352bc9c205067f31f864e3"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.190490 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" event={"ID":"b81b63fd-c7d6-4446-ab93-c62912586002","Type":"ContainerStarted","Data":"119b829edb31a9542c4e4278438498a6d436d9ffaf05166df40ae801a6ad750b"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.198909 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.199282 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" podStartSLOduration=98.199261347 podStartE2EDuration="1m38.199261347s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.197687513 +0000 UTC m=+118.099011617" watchObservedRunningTime="2025-12-08 17:44:21.199261347 +0000 UTC m=+118.100585441" Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.199820 5118 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.699777531 +0000 UTC m=+118.601101625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.200143 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-dhfvx" event={"ID":"a272b1fd-864b-4107-a4fd-6f6ab82a1d34","Type":"ContainerStarted","Data":"137e813a0937cb381bde370d2667c10d162673b57a4dea10c7dc09f970f70b80"} Dec 08 17:44:21 crc kubenswrapper[5118]: W1208 17:44:21.221578 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bd2df11_789d_4a3f_a7c4_2d6afbe38d0f.slice/crio-07295a2f7b5b3c65edd96320d163fc6e805c5e086dfc16c48b794802b29335a5 WatchSource:0}: Error finding container 07295a2f7b5b3c65edd96320d163fc6e805c5e086dfc16c48b794802b29335a5: Status 404 returned error can't find the container with id 07295a2f7b5b3c65edd96320d163fc6e805c5e086dfc16c48b794802b29335a5 Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.224358 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c5tbq"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.228052 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" event={"ID":"9af82654-06bc-4376-bff5-d6adacce9785","Type":"ContainerStarted","Data":"79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.229364 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.240114 5118 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-85wdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.240178 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.251595 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-d69qv" event={"ID":"ada44265-dcab-408c-843e-e5c5a45aa138","Type":"ContainerStarted","Data":"860f26053ce76290b7bc171c60796fd1bc70f38dff02f1d38ba7ca5ff60bc527"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.260830 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" podStartSLOduration=98.260814066 podStartE2EDuration="1m38.260814066s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.258156873 +0000 UTC m=+118.159480967" watchObservedRunningTime="2025-12-08 17:44:21.260814066 +0000 UTC m=+118.162138160" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.263430 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" event={"ID":"f5c1e280-e9c9-4a30-bb13-023852fd940b","Type":"ContainerStarted","Data":"3e1c0e4286fa8553cac07df518ffa1ec3df3f536de54e51e9f0e3e2ee97a8a97"} Dec 08 17:44:21 crc kubenswrapper[5118]: W1208 17:44:21.268747 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78316998_7ca1_4495_997b_bad16252fa84.slice/crio-f53011be2342d2e1df81bdc8b416956b013bd24a064c9fab44113e845132ed40 WatchSource:0}: Error finding container f53011be2342d2e1df81bdc8b416956b013bd24a064c9fab44113e845132ed40: Status 404 returned error can't find the container with id f53011be2342d2e1df81bdc8b416956b013bd24a064c9fab44113e845132ed40 Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.276390 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-psjrr" event={"ID":"c32d3580-29a1-4299-8926-e4c9caa4ff86","Type":"ContainerStarted","Data":"2449e51c8bd92a5ae9266cc65003e01d779611a71a071521302420a8b6a964d5"} Dec 08 17:44:21 crc kubenswrapper[5118]: W1208 17:44:21.278284 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f90a7a2_721d_4929_a4fa_fd1d2019b4cd.slice/crio-ec4f637da38df73ec133f4025cc97b99242e32adbee5eb6a0c499617f008b5d0 WatchSource:0}: Error finding container ec4f637da38df73ec133f4025cc97b99242e32adbee5eb6a0c499617f008b5d0: Status 404 returned error can't find the container with id ec4f637da38df73ec133f4025cc97b99242e32adbee5eb6a0c499617f008b5d0 Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.286412 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" event={"ID":"f22fa87e-79cb-498c-a2ab-166d47fd70a5","Type":"ContainerStarted","Data":"05562bed0a58785cbffd80e5e63ed8943b1bccf2f61dbd7cf94aec4efa9e38cf"} Dec 08 17:44:21 crc kubenswrapper[5118]: W1208 17:44:21.290271 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1125cbf4_59e9_464e_8305_d2fc133ae675.slice/crio-010f527922a84c158a643229ed5cb60f8dc1f0dddf8d575e30ead9ed434fdc86 WatchSource:0}: Error finding container 010f527922a84c158a643229ed5cb60f8dc1f0dddf8d575e30ead9ed434fdc86: Status 404 returned error can't find the container with id 010f527922a84c158a643229ed5cb60f8dc1f0dddf8d575e30ead9ed434fdc86 Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.291825 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" event={"ID":"2554c491-6bfb-47fd-9b76-c1da12e702b1","Type":"ContainerStarted","Data":"110edd92de6738b6a2f2fcdb7c3fb80c9e48740058e8fbf9830549678e450e9e"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.296591 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" event={"ID":"9a815eca-9800-4b68-adc1-5953173f4427","Type":"ContainerStarted","Data":"518405f96101746b0bc0cd77bc85cab252964c4a248207c6a56847414c3fd366"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.299170 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.301396 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.302952 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.802932405 +0000 UTC m=+118.704256509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.323720 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.325805 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cdz4v" podStartSLOduration=98.325780438 podStartE2EDuration="1m38.325780438s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.324952725 +0000 UTC m=+118.226276819" watchObservedRunningTime="2025-12-08 17:44:21.325780438 +0000 UTC m=+118.227104532" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.353733 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" event={"ID":"1a749ad3-837c-4804-b23c-2abb017b5b82","Type":"ContainerStarted","Data":"8cf32ec666698450c5e41a28e9854684e672a190f9b30154685fbcf02ec6fc55"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.360911 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:21 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:21 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:21 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.360984 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" 
podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.369804 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" event={"ID":"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b","Type":"ContainerStarted","Data":"7e30c5d9c26bffaf27132475d044e517d0468cb10a56a5b9876e62c41fd6908b"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.385574 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" event={"ID":"dbad8204-9790-4f15-a74c-0149d19a4785","Type":"ContainerStarted","Data":"6278e7a7e9899b2ad5ce35a00b137621f1dfd9cc576a596b193fa1aabb7e545e"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.406916 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.414914 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:21.914887629 +0000 UTC m=+118.816211723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.417236 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" event={"ID":"c987ac4d-5129-45aa-afe4-ab42b6907462","Type":"ContainerStarted","Data":"044dfd7745fd201d9ca0be6708e7ec266db6338da8c530c6a9a94e1a77e85897"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.417911 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-5httz" podStartSLOduration=98.41789123 podStartE2EDuration="1m38.41789123s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.417662674 +0000 UTC m=+118.318986768" watchObservedRunningTime="2025-12-08 17:44:21.41789123 +0000 UTC m=+118.319215324" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.428859 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bl822" podStartSLOduration=98.428832869 podStartE2EDuration="1m38.428832869s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 
17:44:21.371658609 +0000 UTC m=+118.272982703" watchObservedRunningTime="2025-12-08 17:44:21.428832869 +0000 UTC m=+118.330156963" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.430183 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.433212 5118 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-ggh59 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.434710 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" podUID="c987ac4d-5129-45aa-afe4-ab42b6907462" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.462678 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" podStartSLOduration=98.462640421 podStartE2EDuration="1m38.462640421s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.4559855 +0000 UTC m=+118.357309594" watchObservedRunningTime="2025-12-08 17:44:21.462640421 +0000 UTC m=+118.363964505" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.500453 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-psb45" event={"ID":"d549986a-81c9-4cd0-86b0-61e4b6700ddf","Type":"ContainerStarted","Data":"721c3f4b7cd8bb68925aacdafa18a9c37430c04da44f9db3f00d1f88a27762e8"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.500543 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" event={"ID":"6be72eaf-a179-4e2b-a12d-4b5dbb213183","Type":"ContainerStarted","Data":"171be8675a537f5c91f07695f1d02d53877dd94bf1dea71665144e6537718f47"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.508615 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.509647 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.009611922 +0000 UTC m=+118.910936016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.511625 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" event={"ID":"1cd09f9c-6a6f-438a-a982-082edc35a55c","Type":"ContainerStarted","Data":"c432fb564191dc1677ec4262eb92512120aaf51382e3ebf58be7ad2bd0d28836"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.553062 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-x7wvx" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.573652 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-x7wvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.573896 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x7wvx" podUID="39c08b26-3404-4ffd-a53a-c86f0c654db7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.581431 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" podStartSLOduration=98.58141263 podStartE2EDuration="1m38.58141263s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.542351525 +0000 UTC m=+118.443675609" watchObservedRunningTime="2025-12-08 17:44:21.58141263 +0000 UTC m=+118.482736724" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.582599 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-x7wvx" podStartSLOduration=98.582594043 podStartE2EDuration="1m38.582594043s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.580742942 +0000 UTC m=+118.482067036" watchObservedRunningTime="2025-12-08 17:44:21.582594043 +0000 UTC m=+118.483918137" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.597776 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" event={"ID":"82728066-0204-4d71-acff-8779194a3e3c","Type":"ContainerStarted","Data":"593f79dabc610f2bbaf97861a67d8d70f77739b85c763ba7a2c9303c2cb845d1"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.610870 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" 
(UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.612449 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.112430836 +0000 UTC m=+119.013754930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.631712 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" event={"ID":"2ecc2ce3-fe03-4f16-9dfd-4a8b1b2b224f","Type":"ContainerStarted","Data":"e94e110c299e4d114ef1037b22649315debde89819b88b7da4a3489cce9128a5"} Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.656616 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.657012 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4g75z" podStartSLOduration=98.656991552 podStartE2EDuration="1m38.656991552s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.65692108 +0000 UTC m=+118.558245174" watchObservedRunningTime="2025-12-08 17:44:21.656991552 +0000 UTC m=+118.558315646" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.658209 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" podStartSLOduration=98.658202396 podStartE2EDuration="1m38.658202396s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:21.623975251 +0000 UTC m=+118.525299345" watchObservedRunningTime="2025-12-08 17:44:21.658202396 +0000 UTC m=+118.559526490" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.714098 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.714370 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.214327946 +0000 UTC m=+119.115652040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.714758 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.716833 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.216820955 +0000 UTC m=+119.118145039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.820598 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.820941 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.320920994 +0000 UTC m=+119.222245088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.821324 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.821700 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.321684805 +0000 UTC m=+119.223008899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.902292 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r22jf"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.920923 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r22jf"] Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.921328 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.924709 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.925150 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.425119726 +0000 UTC m=+119.326443810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.925279 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:21 crc kubenswrapper[5118]: E1208 17:44:21.925800 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.425792374 +0000 UTC m=+119.327116468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:21 crc kubenswrapper[5118]: I1208 17:44:21.931343 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.033040 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.033182 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-utilities\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.033232 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.533205625 +0000 UTC m=+119.434529719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.033321 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.033347 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-catalog-content\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.033476 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4p2p\" (UniqueName: \"kubernetes.io/projected/cb8303fe-2019-44f4-a124-af174b28cc02-kube-api-access-k4p2p\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.033803 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.533794451 +0000 UTC m=+119.435118545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.073138 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lxwl6"] Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.083971 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.089365 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.091944 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxwl6"] Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.135550 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.135790 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4p2p\" (UniqueName: \"kubernetes.io/projected/cb8303fe-2019-44f4-a124-af174b28cc02-kube-api-access-k4p2p\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.135855 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-utilities\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.137199 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.63717315 +0000 UTC m=+119.538497244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.137785 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.137824 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-catalog-content\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.138311 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-catalog-content\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.138407 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.638395993 +0000 UTC m=+119.539720087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.138778 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-utilities\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.166554 5118 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-4kjg6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.166623 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" podUID="085a3a20-9b8f-4448-a4cb-89465f57027c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.16:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.172222 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4p2p\" (UniqueName: \"kubernetes.io/projected/cb8303fe-2019-44f4-a124-af174b28cc02-kube-api-access-k4p2p\") pod \"community-operators-r22jf\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.238617 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.238810 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-catalog-content\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.238857 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7kr\" (UniqueName: \"kubernetes.io/projected/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-kube-api-access-ht7kr\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.238898 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-utilities\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.239161 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.739128731 +0000 UTC m=+119.640452825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.287793 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sb7gg"] Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.291476 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.314025 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.321429 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sb7gg"] Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.342588 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-catalog-content\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.342649 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.342670 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7kr\" (UniqueName: \"kubernetes.io/projected/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-kube-api-access-ht7kr\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.342698 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-utilities\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.343621 5118 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-utilities\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.343831 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-catalog-content\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.344092 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.844081084 +0000 UTC m=+119.745405168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.362608 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:22 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:22 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:22 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.362668 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.395092 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7kr\" (UniqueName: \"kubernetes.io/projected/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-kube-api-access-ht7kr\") pod \"certified-operators-lxwl6\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.417677 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.448426 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.448620 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-catalog-content\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.448645 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-utilities\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.448716 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2h7\" (UniqueName: \"kubernetes.io/projected/e4f4fc3c-88d2-455a-a8d2-209388238c9a-kube-api-access-hb2h7\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.448895 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:22.948859022 +0000 UTC m=+119.850183116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.474250 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n5vp7"] Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.485116 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.494657 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n5vp7"] Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.555134 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-catalog-content\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.555464 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-utilities\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.555551 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.555658 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2h7\" (UniqueName: \"kubernetes.io/projected/e4f4fc3c-88d2-455a-a8d2-209388238c9a-kube-api-access-hb2h7\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.555770 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2mc\" (UniqueName: \"kubernetes.io/projected/8c05f773-74bd-433b-84ce-a7f5430d9b55-kube-api-access-zj2mc\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.555838 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-utilities\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.555940 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-catalog-content\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.556380 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-catalog-content\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc 
kubenswrapper[5118]: I1208 17:44:22.556660 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-utilities\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.557004 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.056992192 +0000 UTC m=+119.958316286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.599765 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2h7\" (UniqueName: \"kubernetes.io/projected/e4f4fc3c-88d2-455a-a8d2-209388238c9a-kube-api-access-hb2h7\") pod \"community-operators-sb7gg\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.648185 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.656809 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.656972 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zj2mc\" (UniqueName: \"kubernetes.io/projected/8c05f773-74bd-433b-84ce-a7f5430d9b55-kube-api-access-zj2mc\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.656996 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-utilities\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.657027 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-catalog-content\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.657487 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-catalog-content\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.657568 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.157546184 +0000 UTC m=+120.058870278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.658237 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-utilities\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.694772 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj2mc\" (UniqueName: \"kubernetes.io/projected/8c05f773-74bd-433b-84ce-a7f5430d9b55-kube-api-access-zj2mc\") pod \"certified-operators-n5vp7\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.759129 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-dhfvx" event={"ID":"a272b1fd-864b-4107-a4fd-6f6ab82a1d34","Type":"ContainerStarted","Data":"a0851aa11b5974f566fd150cdfab9987f07f3a45a053ceb868414ea97382e3f6"} Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.764256 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.765038 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.265013436 +0000 UTC m=+120.166337540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.784186 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" event={"ID":"0157c9d2-3779-46c8-9da9-1fffa52986a6","Type":"ContainerStarted","Data":"a9ad2d4af15a7b02d34660f0fa4da4372133dce7f21251719f810bbbdaa76756"} Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.784256 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" event={"ID":"0157c9d2-3779-46c8-9da9-1fffa52986a6","Type":"ContainerStarted","Data":"7e54acff5cfb3f98418c71eda57c0e86ac7ddac7f2c3cafad91c274492ac084b"} Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.785485 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-dhfvx" podStartSLOduration=99.785469904 podStartE2EDuration="1m39.785469904s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:22.781588918 +0000 UTC m=+119.682913022" watchObservedRunningTime="2025-12-08 17:44:22.785469904 +0000 UTC m=+119.686793998" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.797522 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" event={"ID":"a52d6e07-c08e-4424-8a3f-50052c311604","Type":"ContainerStarted","Data":"df5e616f9c6ff73ae0ee44a91beaf4a9420fd136403140bab12e394f03d6b968"} Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.805841 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-psjrr" event={"ID":"c32d3580-29a1-4299-8926-e4c9caa4ff86","Type":"ContainerStarted","Data":"e29ea2212bf35d3284d2abdd207f3c83c65ab879b16fea9c6554ac7e685c05b3"} Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.817168 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gvb6q" podStartSLOduration=99.817142997 podStartE2EDuration="1m39.817142997s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:22.81280555 +0000 UTC m=+119.714129644" watchObservedRunningTime="2025-12-08 17:44:22.817142997 +0000 UTC m=+119.718467091" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.839289 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-psjrr" podStartSLOduration=7.839261991 podStartE2EDuration="7.839261991s" podCreationTimestamp="2025-12-08 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:22.838372477 +0000 UTC m=+119.739696571" watchObservedRunningTime="2025-12-08 17:44:22.839261991 
+0000 UTC m=+119.740586075" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.863286 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.866140 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.869980 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.369950518 +0000 UTC m=+120.271274612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.906223 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" event={"ID":"f22fa87e-79cb-498c-a2ab-166d47fd70a5","Type":"ContainerStarted","Data":"024d5933c987537bac613a783f4706ba8e376729ecb77f514f90b867ea261e77"} Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.933307 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" podStartSLOduration=99.933261265 podStartE2EDuration="1m39.933261265s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:22.924095075 +0000 UTC m=+119.825419169" watchObservedRunningTime="2025-12-08 17:44:22.933261265 +0000 UTC m=+119.834585349" Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.938153 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" event={"ID":"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e","Type":"ContainerStarted","Data":"803ad93dfa8700dbf09b3e6a4e33d63e186ef2c8cc3dfa4d900a01a2b041fbcf"} Dec 08 17:44:22 crc kubenswrapper[5118]: I1208 17:44:22.972354 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:22 crc kubenswrapper[5118]: E1208 17:44:22.972793 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:23.472775683 +0000 UTC m=+120.374099767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.012294 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" event={"ID":"9148080a-77e2-4847-840a-d67f837c8fbe","Type":"ContainerStarted","Data":"b4a48a1365cfcd6b53bb2c42cf7c5d2e8b6b40608e8d0b2fa635580b9e8994fb"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.022503 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.041311 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" event={"ID":"c987ac4d-5129-45aa-afe4-ab42b6907462","Type":"ContainerStarted","Data":"b52c97fa57f97ef5b8b477d5b8f5089543b95366b6e341c382fc9b227cf0f585"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.047219 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" event={"ID":"dbad8204-9790-4f15-a74c-0149d19a4785","Type":"ContainerStarted","Data":"f3fcf3206fb45f5adf2397465091821c18987e27c5e4e0772d3e8121c8e90ea4"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.048266 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ggh59" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.057655 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" podStartSLOduration=100.057631187 podStartE2EDuration="1m40.057631187s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.046325909 +0000 UTC m=+119.947650013" watchObservedRunningTime="2025-12-08 17:44:23.057631187 +0000 UTC m=+119.958955281" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.069473 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" event={"ID":"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd","Type":"ContainerStarted","Data":"60b7e4b2ed8bc3dbeb731487f0c7768dc39e32b8efcddd9c641fd780bed0d464"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.069524 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" event={"ID":"0f90a7a2-721d-4929-a4fa-fd1d2019b4cd","Type":"ContainerStarted","Data":"ec4f637da38df73ec133f4025cc97b99242e32adbee5eb6a0c499617f008b5d0"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.074344 5118 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.078291 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6gkgz" podStartSLOduration=100.078272191 podStartE2EDuration="1m40.078272191s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.075746302 +0000 UTC m=+119.977070396" watchObservedRunningTime="2025-12-08 17:44:23.078272191 +0000 UTC m=+119.979596285" Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.078496 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.578461296 +0000 UTC m=+120.479785400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.078642 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.079166 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.579158235 +0000 UTC m=+120.480482329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.084199 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-54w78" event={"ID":"e666ddb1-3625-4468-9d05-21215b5041c1","Type":"ContainerStarted","Data":"308be6025593230fb216a5f93f2da9a935a1db6d0eca3ff9e259dcbda51b69bc"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.103869 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" event={"ID":"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49","Type":"ContainerStarted","Data":"bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.104443 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.136061 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" event={"ID":"3a9ac21c-f3fb-42c7-a5ce-096d015b8d3c","Type":"ContainerStarted","Data":"e7ecb656600cbc5308fa5bce9e248f0c68306350c1723fb015afb2083a0f476c"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.141823 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m5ltz" podStartSLOduration=100.141806583 podStartE2EDuration="1m40.141806583s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.103944011 +0000 UTC m=+120.005268105" watchObservedRunningTime="2025-12-08 17:44:23.141806583 +0000 UTC m=+120.043130677" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.185078 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.195678 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.695657103 +0000 UTC m=+120.596981187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.216177 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.221245 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" podStartSLOduration=100.22121796 podStartE2EDuration="1m40.22121796s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.183009977 +0000 UTC m=+120.084334071" watchObservedRunningTime="2025-12-08 17:44:23.22121796 +0000 UTC m=+120.122542054" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.229505 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" event={"ID":"92b6ea75-6b68-454a-855f-958a2bf6150b","Type":"ContainerStarted","Data":"bd3a6027212e42b07c61c418b8f0a2d978182b56baf43f42d8212154c97b7745"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.242941 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" podStartSLOduration=8.242911951 podStartE2EDuration="8.242911951s" podCreationTimestamp="2025-12-08 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.215106202 +0000 UTC m=+120.116430286" watchObservedRunningTime="2025-12-08 17:44:23.242911951 +0000 UTC m=+120.144236045" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.243634 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-d69qv" event={"ID":"ada44265-dcab-408c-843e-e5c5a45aa138","Type":"ContainerStarted","Data":"b2aa9c4c57c65aee873074ad158e55d8d32401f835decd9dfa0fe59c2c0486f0"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.258406 5118 generic.go:358] "Generic (PLEG): container finished" podID="695dd41c-159e-4e22-98e5-e27fdf4296fd" containerID="5db9e8626ffbde6eedabeaf1aaed331f322b8119c5b5e089cb13350beafbb6b7" exitCode=0 Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.258537 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" event={"ID":"695dd41c-159e-4e22-98e5-e27fdf4296fd","Type":"ContainerDied","Data":"5db9e8626ffbde6eedabeaf1aaed331f322b8119c5b5e089cb13350beafbb6b7"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.266939 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" event={"ID":"163e109f-c588-4057-a961-86bcca55948f","Type":"ContainerStarted","Data":"be7f0e3772a8defa61ea19e3200d8378c3e9b491f81e600603f9dc526770d6ad"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.278969 5118 generic.go:358] "Generic (PLEG): container finished" 
podID="ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b" containerID="bc6ccfba88dcd3a4aaa437d93bd20326a7caee4a54a842dab3f5b3c384dfc2c7" exitCode=0 Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.279048 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" event={"ID":"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b","Type":"ContainerDied","Data":"bc6ccfba88dcd3a4aaa437d93bd20326a7caee4a54a842dab3f5b3c384dfc2c7"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.285975 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-d69qv" podStartSLOduration=100.285957195 podStartE2EDuration="1m40.285957195s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.281333369 +0000 UTC m=+120.182657463" watchObservedRunningTime="2025-12-08 17:44:23.285957195 +0000 UTC m=+120.187281289" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.288626 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.290633 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.790600632 +0000 UTC m=+120.691924716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.318887 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" event={"ID":"6be72eaf-a179-4e2b-a12d-4b5dbb213183","Type":"ContainerStarted","Data":"832b305d1240cc0bcf2c0781c604f8dfbf6ed51b6a99cdfa47e89240a441c59f"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.346313 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-79mps" event={"ID":"2e8b3e0b-d963-4522-9a08-71aee0979479","Type":"ContainerStarted","Data":"b9ab01aa001ad2c5784ddac95fc04ea32122d7e15a7294751601084b9dfa2398"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.346366 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.373642 5118 patch_prober.go:28] interesting pod/console-operator-67c89758df-79mps container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.374073 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-79mps" podUID="2e8b3e0b-d963-4522-9a08-71aee0979479" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.380106 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:23 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:23 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:23 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.380177 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.394623 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.894599349 +0000 UTC m=+120.795923443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.391244 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.398604 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.401775 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:23.901753524 +0000 UTC m=+120.803077618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.407560 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6lgwk" podStartSLOduration=100.407539831 podStartE2EDuration="1m40.407539831s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.362731539 +0000 UTC m=+120.264055633" watchObservedRunningTime="2025-12-08 17:44:23.407539831 +0000 UTC m=+120.308863925" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.413536 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" event={"ID":"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f","Type":"ContainerStarted","Data":"07295a2f7b5b3c65edd96320d163fc6e805c5e086dfc16c48b794802b29335a5"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.482324 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxwl6"] Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.484353 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-79mps" podStartSLOduration=100.484342787 podStartE2EDuration="1m40.484342787s" podCreationTimestamp="2025-12-08 
17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.437945071 +0000 UTC m=+120.339269165" watchObservedRunningTime="2025-12-08 17:44:23.484342787 +0000 UTC m=+120.385666871" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.488899 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bdhnb"] Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.490848 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" podStartSLOduration=100.490837244 podStartE2EDuration="1m40.490837244s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.490113174 +0000 UTC m=+120.391437268" watchObservedRunningTime="2025-12-08 17:44:23.490837244 +0000 UTC m=+120.392161338" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.494500 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-psb45" event={"ID":"d549986a-81c9-4cd0-86b0-61e4b6700ddf","Type":"ContainerStarted","Data":"a662c9acfb07ed2f14bc4127b75d72d9153d9b81859175ec7a867188ba1254ba"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.500295 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.502436 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.00242026 +0000 UTC m=+120.903744354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.543549 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-rwgjl" event={"ID":"1cd09f9c-6a6f-438a-a982-082edc35a55c","Type":"ContainerStarted","Data":"64bd2f71b58b495b5e36993858116990db354604f0b9f30f1caf5765ff03b023"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.596666 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-x7wvx" event={"ID":"39c08b26-3404-4ffd-a53a-c86f0c654db7","Type":"ContainerStarted","Data":"6c578d93a160808aee9ec8cf8fa8cb8a6c1983c8369e7e5a24a4e75acc104f6f"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.610328 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.615560 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-x7wvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.615630 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x7wvx" podUID="39c08b26-3404-4ffd-a53a-c86f0c654db7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.626212 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.12597652 +0000 UTC m=+121.027300614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.705256 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-5pp5q" event={"ID":"82728066-0204-4d71-acff-8779194a3e3c","Type":"ContainerStarted","Data":"64ae03bc760b16a97ac08d47ec6de45d8ad5b020a6e0d93d356a7a876bc17657"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.738748 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.740003 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.2399862 +0000 UTC m=+121.141310294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.741333 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" event={"ID":"4c48eb41-252c-441b-9506-329d9f6b0371","Type":"ContainerStarted","Data":"6ee4d78203172a5069c5cea99a73408be0a20a542e6bcd5718798cc60a82f4ad"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.765155 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c5tbq" event={"ID":"1125cbf4-59e9-464e-8305-d2fc133ae675","Type":"ContainerStarted","Data":"010f527922a84c158a643229ed5cb60f8dc1f0dddf8d575e30ead9ed434fdc86"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.784407 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sb7gg"] Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.804209 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r22jf"] Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.840947 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 
17:44:23.841529 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.341518129 +0000 UTC m=+121.242842223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.849641 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" event={"ID":"78316998-7ca1-4495-997b-bad16252fa84","Type":"ContainerStarted","Data":"b9fa76bf2d82fee218de0c964d26f3051df596dd036090c19df805a588de04e5"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.849678 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" event={"ID":"78316998-7ca1-4495-997b-bad16252fa84","Type":"ContainerStarted","Data":"f53011be2342d2e1df81bdc8b416956b013bd24a064c9fab44113e845132ed40"} Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.855479 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-4kjg6" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.865245 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.886533 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-psb45" podStartSLOduration=8.886516666 podStartE2EDuration="8.886516666s" podCreationTimestamp="2025-12-08 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.879822314 +0000 UTC m=+120.781146408" watchObservedRunningTime="2025-12-08 17:44:23.886516666 +0000 UTC m=+120.787840750" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.910731 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5scww" podStartSLOduration=100.910711647 podStartE2EDuration="1m40.910711647s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.909451442 +0000 UTC m=+120.810775536" watchObservedRunningTime="2025-12-08 17:44:23.910711647 +0000 UTC m=+120.812035741" Dec 08 17:44:23 crc kubenswrapper[5118]: I1208 17:44:23.944444 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:23 crc kubenswrapper[5118]: E1208 17:44:23.945741 5118 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.445723611 +0000 UTC m=+121.347047705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.018009 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" podStartSLOduration=101.017993293 podStartE2EDuration="1m41.017993293s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:23.986255527 +0000 UTC m=+120.887579621" watchObservedRunningTime="2025-12-08 17:44:24.017993293 +0000 UTC m=+120.919317397" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.025342 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n5vp7"] Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.048210 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.048948 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.548934117 +0000 UTC m=+121.450258211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.077097 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvglb"] Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.085456 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.088211 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.093383 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvglb"] Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.154149 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.154297 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-utilities\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.154370 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-catalog-content\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.154395 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6xtn\" (UniqueName: \"kubernetes.io/projected/fe467668-8954-4465-87ca-ef1d5f933d43-kube-api-access-v6xtn\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.154483 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.654469316 +0000 UTC m=+121.555793400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.258613 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-utilities\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.258703 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-catalog-content\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.258743 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6xtn\" (UniqueName: \"kubernetes.io/projected/fe467668-8954-4465-87ca-ef1d5f933d43-kube-api-access-v6xtn\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.258775 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.259035 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.759023777 +0000 UTC m=+121.660347871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.259853 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-utilities\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.260096 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-catalog-content\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.290313 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6xtn\" (UniqueName: \"kubernetes.io/projected/fe467668-8954-4465-87ca-ef1d5f933d43-kube-api-access-v6xtn\") pod \"redhat-marketplace-rvglb\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.359691 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.360413 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.860395772 +0000 UTC m=+121.761719866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.362316 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.362442 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:24 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:24 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:24 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.363263 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.462301 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.462868 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:24.962850367 +0000 UTC m=+121.864174461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.485759 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6m6rs"] Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.563504 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.563940 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.063923674 +0000 UTC m=+121.965247768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.573085 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m6rs"] Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.573234 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.593715 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56656: no serving certificate available for the kubelet" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.664890 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.665865 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.165852304 +0000 UTC m=+122.067176398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.689491 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56672: no serving certificate available for the kubelet" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.741511 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56686: no serving certificate available for the kubelet" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.767463 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.767607 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.26758964 +0000 UTC m=+122.168913734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.768007 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-utilities\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.768057 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.768124 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9vb\" (UniqueName: \"kubernetes.io/projected/caab7ab2-a04e-42fc-bd64-76c76ee3755d-kube-api-access-mj9vb\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.768142 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-catalog-content\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.768407 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.268396752 +0000 UTC m=+122.169720836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.851072 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56700: no serving certificate available for the kubelet" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.868926 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.869147 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mj9vb\" (UniqueName: \"kubernetes.io/projected/caab7ab2-a04e-42fc-bd64-76c76ee3755d-kube-api-access-mj9vb\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.869176 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-catalog-content\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.869241 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-utilities\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.869802 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.369786447 +0000 UTC m=+122.271110541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.870069 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-utilities\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.870300 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-catalog-content\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.904748 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj9vb\" (UniqueName: \"kubernetes.io/projected/caab7ab2-a04e-42fc-bd64-76c76ee3755d-kube-api-access-mj9vb\") pod \"redhat-marketplace-6m6rs\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.936582 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-54w78" event={"ID":"e666ddb1-3625-4468-9d05-21215b5041c1","Type":"ContainerStarted","Data":"fdca4137bcd877b225bccd8cfbb08b01155f0d71ce01973d865f059d066e107d"} Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.953585 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56712: no serving certificate available for the kubelet" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.953715 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" event={"ID":"92b6ea75-6b68-454a-855f-958a2bf6150b","Type":"ContainerStarted","Data":"97ce02ccd9ff531d5ba0111842e01affabf3c614e9f70053e802339fc5e50f7f"} Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.958818 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" event={"ID":"695dd41c-159e-4e22-98e5-e27fdf4296fd","Type":"ContainerStarted","Data":"b88be215f7917b6b5e9e3e8743719434426702b2cb445e70b7009f125cf2cb2d"} Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.974239 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:24 crc kubenswrapper[5118]: E1208 17:44:24.974584 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:25.474569616 +0000 UTC m=+122.375893720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.979077 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" event={"ID":"ceb6ea27-6be6-4eb2-8f56-d8ddfa3f0b0b","Type":"ContainerStarted","Data":"fa3bbc1311d87dbad0da4e21ed22056eaf63b7f3c679b1dae1cdfa33e5cd5f83"} Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.979624 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.988199 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" event={"ID":"6be72eaf-a179-4e2b-a12d-4b5dbb213183","Type":"ContainerStarted","Data":"b3fb11cb5d467fecb0a5e647afffa30560ddb6e1ada0a7d8accd2f5aa3d227cb"} Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.993493 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-487qx" podStartSLOduration=101.993476231 podStartE2EDuration="1m41.993476231s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:24.992575017 +0000 UTC m=+121.893899121" watchObservedRunningTime="2025-12-08 17:44:24.993476231 +0000 UTC m=+121.894800325" Dec 08 17:44:24 crc kubenswrapper[5118]: I1208 17:44:24.995916 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-54w78" podStartSLOduration=101.995907207 podStartE2EDuration="1m41.995907207s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:24.963159644 +0000 UTC m=+121.864483738" watchObservedRunningTime="2025-12-08 17:44:24.995907207 +0000 UTC m=+121.897231301" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.012314 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-79mps" event={"ID":"2e8b3e0b-d963-4522-9a08-71aee0979479","Type":"ContainerStarted","Data":"7fac6350ad9edf8bb3a2a4a21218be28f86dcb5d0e910464d3c25eb69a8ffeb6"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.014395 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-k26tc" event={"ID":"1bd2df11-789d-4a3f-a7c4-2d6afbe38d0f","Type":"ContainerStarted","Data":"1a132896ab3d867a5c220b008cf16547ca1889b485fb68b2fda578e587ca5ce1"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.023176 5118 generic.go:358] "Generic (PLEG): container finished" podID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerID="9f8a267f29d857e5a98610714fb9c1148b18b501a31c792893697428ba40718c" 
exitCode=0 Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.023314 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5vp7" event={"ID":"8c05f773-74bd-433b-84ce-a7f5430d9b55","Type":"ContainerDied","Data":"9f8a267f29d857e5a98610714fb9c1148b18b501a31c792893697428ba40718c"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.023352 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5vp7" event={"ID":"8c05f773-74bd-433b-84ce-a7f5430d9b55","Type":"ContainerStarted","Data":"ae661eb10cf40ee037c5bbb75003eb4cc6748efa6c166b48128d05415dc58f58"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.026244 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-9b988" podStartSLOduration=102.026224334 podStartE2EDuration="1m42.026224334s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:25.017954639 +0000 UTC m=+121.919278733" watchObservedRunningTime="2025-12-08 17:44:25.026224334 +0000 UTC m=+121.927548418" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.029273 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvglb"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.038824 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" podStartSLOduration=102.038808217 podStartE2EDuration="1m42.038808217s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:25.038065287 +0000 UTC m=+121.939389401" watchObservedRunningTime="2025-12-08 17:44:25.038808217 +0000 UTC m=+121.940132301" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.043429 5118 generic.go:358] "Generic (PLEG): container finished" podID="742843af-c521-4d4a-beea-e6feae8140e1" containerID="767de89320d755dfb8a7c8272189759333c3065a02a02dd9f10f50995cf940ca" exitCode=0 Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.045771 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" event={"ID":"742843af-c521-4d4a-beea-e6feae8140e1","Type":"ContainerDied","Data":"767de89320d755dfb8a7c8272189759333c3065a02a02dd9f10f50995cf940ca"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.058100 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.078272 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.078626 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:25.578609664 +0000 UTC m=+122.479933758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.098032 5118 generic.go:358] "Generic (PLEG): container finished" podID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerID="4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2" exitCode=0 Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.099440 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sb7gg" event={"ID":"e4f4fc3c-88d2-455a-a8d2-209388238c9a","Type":"ContainerDied","Data":"4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.099479 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sb7gg" event={"ID":"e4f4fc3c-88d2-455a-a8d2-209388238c9a","Type":"ContainerStarted","Data":"ef0290741bfadc050351726657ea5d1e90d89bb42d86f01f2b2081dd32e004ea"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.101921 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.134069 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zfv6j"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.135048 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.142195 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.142435 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.149188 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.149221 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c5tbq" event={"ID":"1125cbf4-59e9-464e-8305-d2fc133ae675","Type":"ContainerStarted","Data":"b6359d32a9de8c556c62eccb2307e2e8e0d5c4b822ce16321de328664c908b19"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.149246 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c5tbq" event={"ID":"1125cbf4-59e9-464e-8305-d2fc133ae675","Type":"ContainerStarted","Data":"dccdf5722b0fa91835591eae0a6d16f018cf9ad94c5520b7d8d889b1b38cd6a7"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.149259 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.149273 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfv6j"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.149283 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-p88k2" event={"ID":"78316998-7ca1-4495-997b-bad16252fa84","Type":"ContainerStarted","Data":"b93dd1948c41350805e948688efc0ebad9b907daacbbb1aa8be183dd05b8a435"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.150140 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.152313 5118 generic.go:358] "Generic (PLEG): container finished" podID="cb8303fe-2019-44f4-a124-af174b28cc02" containerID="9c387ba0d120976d9e95c5e677248d0f768c20788e2e7bb48afee673c174f607" exitCode=0 Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.152628 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r22jf" event={"ID":"cb8303fe-2019-44f4-a124-af174b28cc02","Type":"ContainerDied","Data":"9c387ba0d120976d9e95c5e677248d0f768c20788e2e7bb48afee673c174f607"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.152747 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r22jf" event={"ID":"cb8303fe-2019-44f4-a124-af174b28cc02","Type":"ContainerStarted","Data":"ad18d5cd7629954fa392ecb15a908995ca8664574c76fffd455810a5c533b257"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.155744 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56714: no serving certificate available for the kubelet" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.178501 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.182642 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c5tbq" podStartSLOduration=10.18259767 podStartE2EDuration="10.18259767s" podCreationTimestamp="2025-12-08 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:25.17453942 +0000 UTC m=+122.075863504" watchObservedRunningTime="2025-12-08 17:44:25.18259767 +0000 UTC m=+122.083921764" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.195797 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.199215 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" event={"ID":"0157c9d2-3779-46c8-9da9-1fffa52986a6","Type":"ContainerStarted","Data":"5b1e3527abd4d516ff1fe9a486073554a47f2b4008d99f1b7b5fd40cb318a6cc"} Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.200304 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.700265112 +0000 UTC m=+122.601589206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.206298 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" event={"ID":"f5c1e280-e9c9-4a30-bb13-023852fd940b","Type":"ContainerStarted","Data":"6a06e5c6f24591bc4180b727776c2132eb88bb771ea9513445aa6ba68cba2d22"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.220350 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-2cnx5" event={"ID":"f22fa87e-79cb-498c-a2ab-166d47fd70a5","Type":"ContainerStarted","Data":"3e5bfabdc27f6c57a8bcc1158ada56ee504e8cafdd3bfc926d4d8e77c7f9d43c"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.224442 5118 generic.go:358] "Generic (PLEG): container finished" podID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerID="7d5a57573a287b700fca389071c6e934a33bbae0a14200922458d1c8c760f5b8" exitCode=0 Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.224503 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxwl6" event={"ID":"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf","Type":"ContainerDied","Data":"7d5a57573a287b700fca389071c6e934a33bbae0a14200922458d1c8c760f5b8"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.224521 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxwl6" event={"ID":"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf","Type":"ContainerStarted","Data":"08a30309aab05f724b4d90b3610f7ad6b5bae8633f9e5f0e956fb4a55ca08d5c"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.251610 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" event={"ID":"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e","Type":"ContainerStarted","Data":"8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6"} Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.271893 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.273068 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-x7wvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.273169 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x7wvx" podUID="39c08b26-3404-4ffd-a53a-c86f0c654db7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.303571 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.303681 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.803660132 +0000 UTC m=+122.704984226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.303777 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9nr\" (UniqueName: \"kubernetes.io/projected/e2c92d64-3525-4675-bbe9-38bfe6dd4504-kube-api-access-nr9nr\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.303814 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-catalog-content\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.304188 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.304246 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.304444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.310603 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.81058258 +0000 UTC m=+122.711906854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.311924 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-utilities\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.327420 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.337559 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-79mps" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.361839 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:25 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:25 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:25 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.361946 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.366159 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2pwhz" podStartSLOduration=102.366148586 podStartE2EDuration="1m42.366148586s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:25.364387279 +0000 UTC m=+122.265711373" watchObservedRunningTime="2025-12-08 17:44:25.366148586 +0000 UTC m=+122.267472680" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.414248 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.414643 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nr9nr\" (UniqueName: \"kubernetes.io/projected/e2c92d64-3525-4675-bbe9-38bfe6dd4504-kube-api-access-nr9nr\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.414689 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-catalog-content\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.414750 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.414773 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.414873 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-utilities\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.415443 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-utilities\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.415534 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:25.915513623 +0000 UTC m=+122.816837717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.416173 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-catalog-content\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.416262 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.430459 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56720: no serving certificate available for the kubelet" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.465420 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr9nr\" (UniqueName: \"kubernetes.io/projected/e2c92d64-3525-4675-bbe9-38bfe6dd4504-kube-api-access-nr9nr\") pod \"redhat-operators-zfv6j\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.475519 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.492123 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.516686 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.517074 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.017063153 +0000 UTC m=+122.918387247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.523732 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w7jrs"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.538820 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" podStartSLOduration=102.538798526 podStartE2EDuration="1m42.538798526s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:25.516551609 +0000 UTC m=+122.417875703" watchObservedRunningTime="2025-12-08 17:44:25.538798526 +0000 UTC m=+122.440122630" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.552932 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-v9sxk" podStartSLOduration=102.552916841 podStartE2EDuration="1m42.552916841s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:25.543311409 +0000 UTC m=+122.444635503" watchObservedRunningTime="2025-12-08 17:44:25.552916841 +0000 UTC m=+122.454240935" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.560071 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w7jrs"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.560188 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.561104 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.565689 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m6rs"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.619541 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.619813 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-utilities\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.619885 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-catalog-content\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.619907 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb86b\" (UniqueName: \"kubernetes.io/projected/ba520484-b334-4e08-8f1a-5eb554b62dc4-kube-api-access-hb86b\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.619999 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.11998343 +0000 UTC m=+123.021307524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.720813 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-catalog-content\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.721238 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hb86b\" (UniqueName: \"kubernetes.io/projected/ba520484-b334-4e08-8f1a-5eb554b62dc4-kube-api-access-hb86b\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.721297 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.721359 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-utilities\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.721693 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-utilities\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.722026 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-catalog-content\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.722247 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.222235079 +0000 UTC m=+123.123559163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.754113 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb86b\" (UniqueName: \"kubernetes.io/projected/ba520484-b334-4e08-8f1a-5eb554b62dc4-kube-api-access-hb86b\") pod \"redhat-operators-w7jrs\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.771800 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.803236 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53954: no serving certificate available for the kubelet" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.822485 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.822943 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.322927766 +0000 UTC m=+123.224251860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.857896 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfv6j"] Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.900086 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:25 crc kubenswrapper[5118]: I1208 17:44:25.924644 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:25 crc kubenswrapper[5118]: E1208 17:44:25.925041 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:26.425029171 +0000 UTC m=+123.326353265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.028341 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.028703 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.528687339 +0000 UTC m=+123.430011433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.132527 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.133337 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.633323982 +0000 UTC m=+123.534648076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.154715 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.235664 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.239160 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.739134299 +0000 UTC m=+123.640458393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.239238 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.239910 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.739865168 +0000 UTC m=+123.641189262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.282305 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6","Type":"ContainerStarted","Data":"1527bd07152c4241e773659ffece99cf6e3c5940dd78184bc260f9235526b2d0"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.308703 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w7jrs"] Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.315366 5118 generic.go:358] "Generic (PLEG): container finished" podID="fe467668-8954-4465-87ca-ef1d5f933d43" containerID="894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14" exitCode=0 Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.316682 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvglb" event={"ID":"fe467668-8954-4465-87ca-ef1d5f933d43","Type":"ContainerDied","Data":"894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.316722 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvglb" event={"ID":"fe467668-8954-4465-87ca-ef1d5f933d43","Type":"ContainerStarted","Data":"300fcfe62cb2a2236d7576185a01858472eaf4d7b3901f788ba4cb7d1721d434"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.341600 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.343078 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.843060924 +0000 UTC m=+123.744385018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.365999 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:26 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:26 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:26 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.366078 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.370760 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" event={"ID":"695dd41c-159e-4e22-98e5-e27fdf4296fd","Type":"ContainerStarted","Data":"57c61945f8a045280e80abf054c57b1455df7003381b85e0f82a941c27e5c047"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.393194 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerID="920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85" exitCode=0 Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.393323 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfv6j" event={"ID":"e2c92d64-3525-4675-bbe9-38bfe6dd4504","Type":"ContainerDied","Data":"920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.393359 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfv6j" event={"ID":"e2c92d64-3525-4675-bbe9-38bfe6dd4504","Type":"ContainerStarted","Data":"bad7cc15753758580e7b5d15966ebb1082d0a9a66fb5c9a65077ce2b2db411b6"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.435057 5118 generic.go:358] "Generic (PLEG): container finished" podID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerID="1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3" exitCode=0 Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.435964 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" podStartSLOduration=103.435938717 podStartE2EDuration="1m43.435938717s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:26.405574478 +0000 UTC m=+123.306898572" watchObservedRunningTime="2025-12-08 17:44:26.435938717 +0000 UTC m=+123.337262811" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.437107 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m6rs" 
event={"ID":"caab7ab2-a04e-42fc-bd64-76c76ee3755d","Type":"ContainerDied","Data":"1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.437155 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m6rs" event={"ID":"caab7ab2-a04e-42fc-bd64-76c76ee3755d","Type":"ContainerStarted","Data":"bebe1f0da9278f62d8caef7874fc35428010d09942af1719c37fac3e6c4e8b5b"} Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.438057 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" gracePeriod=30 Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.446002 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.448163 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:26.948149021 +0000 UTC m=+123.849473115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.488159 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53962: no serving certificate available for the kubelet" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.547702 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.547844 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.047816989 +0000 UTC m=+123.949141083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.548502 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.554265 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.054244014 +0000 UTC m=+123.955568108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.649964 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.650158 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.15012538 +0000 UTC m=+124.051449474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.650614 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.650944 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.150928301 +0000 UTC m=+124.052252385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.696842 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.752194 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcp6m\" (UniqueName: \"kubernetes.io/projected/742843af-c521-4d4a-beea-e6feae8140e1-kube-api-access-tcp6m\") pod \"742843af-c521-4d4a-beea-e6feae8140e1\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.752350 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.752506 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/742843af-c521-4d4a-beea-e6feae8140e1-config-volume\") pod \"742843af-c521-4d4a-beea-e6feae8140e1\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.752576 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/742843af-c521-4d4a-beea-e6feae8140e1-secret-volume\") pod \"742843af-c521-4d4a-beea-e6feae8140e1\" (UID: \"742843af-c521-4d4a-beea-e6feae8140e1\") " Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.752650 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.252621695 +0000 UTC m=+124.153945789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.752990 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.753335 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742843af-c521-4d4a-beea-e6feae8140e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "742843af-c521-4d4a-beea-e6feae8140e1" (UID: "742843af-c521-4d4a-beea-e6feae8140e1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.753549 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:27.253540961 +0000 UTC m=+124.154865055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.759124 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742843af-c521-4d4a-beea-e6feae8140e1-kube-api-access-tcp6m" (OuterVolumeSpecName: "kube-api-access-tcp6m") pod "742843af-c521-4d4a-beea-e6feae8140e1" (UID: "742843af-c521-4d4a-beea-e6feae8140e1"). InnerVolumeSpecName "kube-api-access-tcp6m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.777737 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742843af-c521-4d4a-beea-e6feae8140e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "742843af-c521-4d4a-beea-e6feae8140e1" (UID: "742843af-c521-4d4a-beea-e6feae8140e1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.854259 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.854527 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/742843af-c521-4d4a-beea-e6feae8140e1-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.854540 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/742843af-c521-4d4a-beea-e6feae8140e1-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.854549 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tcp6m\" (UniqueName: \"kubernetes.io/projected/742843af-c521-4d4a-beea-e6feae8140e1-kube-api-access-tcp6m\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.854654 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.354636708 +0000 UTC m=+124.255960802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:26 crc kubenswrapper[5118]: I1208 17:44:26.956493 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:26 crc kubenswrapper[5118]: E1208 17:44:26.956784 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.456771734 +0000 UTC m=+124.358095828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.058151 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.058356 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.558324763 +0000 UTC m=+124.459648857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.058564 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.059062 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.559045854 +0000 UTC m=+124.460369948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.161417 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.161659 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.661627301 +0000 UTC m=+124.562951405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.162184 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.162602 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.662592188 +0000 UTC m=+124.563916282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.263467 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.263585 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.763561082 +0000 UTC m=+124.664885176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.263959 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.264304 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.764290262 +0000 UTC m=+124.665614356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.357061 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:27 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:27 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:27 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.357127 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.365000 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.365500 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:27.865483012 +0000 UTC m=+124.766807106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.505672 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.506155 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.006134648 +0000 UTC m=+124.907458742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.538417 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.538950 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-qhrfp" event={"ID":"742843af-c521-4d4a-beea-e6feae8140e1","Type":"ContainerDied","Data":"e7e4e5294ae9ba605e56ade0a4247be53479964b4053089fd141b6910e3a9015"} Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.538980 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7e4e5294ae9ba605e56ade0a4247be53479964b4053089fd141b6910e3a9015" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.555306 5118 generic.go:358] "Generic (PLEG): container finished" podID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerID="fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e" exitCode=0 Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.555429 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7jrs" event={"ID":"ba520484-b334-4e08-8f1a-5eb554b62dc4","Type":"ContainerDied","Data":"fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e"} Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.555479 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7jrs" event={"ID":"ba520484-b334-4e08-8f1a-5eb554b62dc4","Type":"ContainerStarted","Data":"79fd674b2f1982666d841b20537687d86fe6bb801a03c4ed53a6f95d3bc986ac"} Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.573949 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6","Type":"ContainerStarted","Data":"6484bd7b032c9d6e599d717f90c2f5e904a6887890fd7f5402f2f861c0c5e1a1"} Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.593506 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-v69x6" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.607661 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.608787 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.108770498 +0000 UTC m=+125.010094592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.608818 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.609395 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="742843af-c521-4d4a-beea-e6feae8140e1" containerName="collect-profiles" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.609414 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="742843af-c521-4d4a-beea-e6feae8140e1" containerName="collect-profiles" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.609512 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="742843af-c521-4d4a-beea-e6feae8140e1" containerName="collect-profiles" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.622383 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.627639 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.628495 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.628645 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.651652 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.651633417 podStartE2EDuration="2.651633417s" podCreationTimestamp="2025-12-08 17:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:27.65137604 +0000 UTC m=+124.552700134" watchObservedRunningTime="2025-12-08 17:44:27.651633417 +0000 UTC m=+124.552957511" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.711787 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46f67036-aba9-49da-a298-d68e56b91e00-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.711935 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46f67036-aba9-49da-a298-d68e56b91e00-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.712088 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.719985 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.219968142 +0000 UTC m=+125.121292226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.801337 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53964: no serving certificate available for the kubelet" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.813639 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.813924 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46f67036-aba9-49da-a298-d68e56b91e00-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.814075 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46f67036-aba9-49da-a298-d68e56b91e00-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.814427 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46f67036-aba9-49da-a298-d68e56b91e00-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.814484 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.314455978 +0000 UTC m=+125.215780062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.854384 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46f67036-aba9-49da-a298-d68e56b91e00-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.917775 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:27 crc kubenswrapper[5118]: E1208 17:44:27.918092 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.418080515 +0000 UTC m=+125.319404609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.956620 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.956674 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:27 crc kubenswrapper[5118]: I1208 17:44:27.974774 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.008737 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.024533 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.026097 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.526066211 +0000 UTC m=+125.427390315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.126736 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.127438 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.627410565 +0000 UTC m=+125.528734659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.231334 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.231499 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.731468113 +0000 UTC m=+125.632792217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.232010 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.232295 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.732287265 +0000 UTC m=+125.633611359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.264252 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.334057 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.334249 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.834221216 +0000 UTC m=+125.735545310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.334343 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.334816 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.834800652 +0000 UTC m=+125.736124746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.349950 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.353376 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:28 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:28 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:28 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.353438 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.437049 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.437287 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.937256277 +0000 UTC m=+125.838580371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.437407 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.437764 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:28.9377467 +0000 UTC m=+125.839070794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.450828 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-x7wvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.450938 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-x7wvx" podUID="39c08b26-3404-4ffd-a53a-c86f0c654db7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.459037 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.459097 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.474299 5118 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-8h8fl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]log ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]etcd ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/max-in-flight-filter ok Dec 08 17:44:28 crc kubenswrapper[5118]: 
[+]poststarthook/storage-object-count-tracker-hook ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 17:44:28 crc kubenswrapper[5118]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 17:44:28 crc kubenswrapper[5118]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/project.openshift.io-projectcache ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-startinformers ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 17:44:28 crc kubenswrapper[5118]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 17:44:28 crc kubenswrapper[5118]: livez check failed Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.474372 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" podUID="695dd41c-159e-4e22-98e5-e27fdf4296fd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.538343 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.538850 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.038831097 +0000 UTC m=+125.940155191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.586402 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"46f67036-aba9-49da-a298-d68e56b91e00","Type":"ContainerStarted","Data":"76b530a11ce10f34f5cdd09e5e49ebd752f7193db3ca9a4a4a8a5819dfcecf1d"} Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.593240 5118 generic.go:358] "Generic (PLEG): container finished" podID="c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6" containerID="6484bd7b032c9d6e599d717f90c2f5e904a6887890fd7f5402f2f861c0c5e1a1" exitCode=0 Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.593335 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6","Type":"ContainerDied","Data":"6484bd7b032c9d6e599d717f90c2f5e904a6887890fd7f5402f2f861c0c5e1a1"} Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.597207 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" event={"ID":"b81b63fd-c7d6-4446-ab93-c62912586002","Type":"ContainerStarted","Data":"eeca3f44ddaa821404c18a1d41ebc4f3f20b9eddadfcb7e3ef48ffae53026b8c"} Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.603512 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rdv9c" Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.641188 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.641517 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.141503417 +0000 UTC m=+126.042827511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.743018 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.743842 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.243800928 +0000 UTC m=+126.145125022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.850654 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.851213 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.351189297 +0000 UTC m=+126.252513391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.959580 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.959828 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.459777549 +0000 UTC m=+126.361101643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:28 crc kubenswrapper[5118]: I1208 17:44:28.960602 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:28 crc kubenswrapper[5118]: E1208 17:44:28.961219 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.461207909 +0000 UTC m=+126.362532003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.065269 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.065707 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.565682098 +0000 UTC m=+126.467006192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.167110 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.167678 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.667660849 +0000 UTC m=+126.568984943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.176734 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.176795 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.187522 5118 patch_prober.go:28] interesting pod/console-64d44f6ddf-dhfvx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.41:8443/health\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.187597 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-dhfvx" podUID="a272b1fd-864b-4107-a4fd-6f6ab82a1d34" containerName="console" probeResult="failure" output="Get \"https://10.217.0.41:8443/health\": dial tcp 10.217.0.41:8443: connect: connection refused" Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.268216 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.268440 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.768408818 +0000 UTC m=+126.669732912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.268954 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.269406 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:29.769388315 +0000 UTC m=+126.670712409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.352500 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:29 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:29 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:29 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.352580 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.370938 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.371350 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.871334475 +0000 UTC m=+126.772658559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.474411 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.474866 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:29.974850389 +0000 UTC m=+126.876174483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.575978 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.576208 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.076142702 +0000 UTC m=+126.977466796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.576382 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.576952 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.076944403 +0000 UTC m=+126.978268497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.617542 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"46f67036-aba9-49da-a298-d68e56b91e00","Type":"ContainerStarted","Data":"1137d339dbeed31b4e8f8182d538e13f79956d683ff0249d8ca00e46c031d0da"} Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.631836 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.631818401 podStartE2EDuration="2.631818401s" podCreationTimestamp="2025-12-08 17:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:29.630804183 +0000 UTC m=+126.532128277" watchObservedRunningTime="2025-12-08 17:44:29.631818401 +0000 UTC m=+126.533142505" Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.679021 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.679242 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.179218434 +0000 UTC m=+127.080542528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.679902 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.680590 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.18057556 +0000 UTC m=+127.081899654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.781748 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.781953 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.281921825 +0000 UTC m=+127.183245919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.782167 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.783448 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.283427486 +0000 UTC m=+127.184751580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.884303 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.884518 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.384484672 +0000 UTC m=+127.285808766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.885389 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.885737 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.385722737 +0000 UTC m=+127.287046831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.914988 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.986238 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.986476 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.486384582 +0000 UTC m=+127.387708676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:29 crc kubenswrapper[5118]: I1208 17:44:29.987882 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:29 crc kubenswrapper[5118]: E1208 17:44:29.988653 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.488587912 +0000 UTC m=+127.389912016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.089304 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kube-api-access\") pod \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.089432 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.089471 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kubelet-dir\") pod \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\" (UID: \"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6\") " Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.089583 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6" (UID: "c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.089643 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.589593837 +0000 UTC m=+127.490917931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.090109 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.090492 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:30.590475302 +0000 UTC m=+127.491799476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.090533 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.100508 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6" (UID: "c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.192681 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.192830 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.692801892 +0000 UTC m=+127.594125986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.192970 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.193308 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.193347 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:30.693330706 +0000 UTC m=+127.594654800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.294605 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.294779 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.794752744 +0000 UTC m=+127.696076828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.295285 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.295607 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.795594796 +0000 UTC m=+127.696918890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.309538 5118 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.351837 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:30 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:30 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:30 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.351907 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.388780 5118 ???:1] "http: TLS handshake error from 192.168.126.11:53972: no serving certificate available for the kubelet" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.396554 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.396750 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.896719375 +0000 UTC m=+127.798043459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.396833 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.397150 5118 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T17:44:30.309575468Z","UUID":"77437195-39a4-4416-9389-8f7ee22969c9","Handler":null,"Name":"","Endpoint":""} Dec 08 17:44:30 crc kubenswrapper[5118]: E1208 17:44:30.397251 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:30.897244889 +0000 UTC m=+127.798568983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s6hn4" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.403809 5118 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.403843 5118 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.498166 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.508474 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.601960 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.622911 5118 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.622960 5118 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.638047 5118 generic.go:358] "Generic (PLEG): container finished" podID="46f67036-aba9-49da-a298-d68e56b91e00" containerID="1137d339dbeed31b4e8f8182d538e13f79956d683ff0249d8ca00e46c031d0da" exitCode=0 Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.638133 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"46f67036-aba9-49da-a298-d68e56b91e00","Type":"ContainerDied","Data":"1137d339dbeed31b4e8f8182d538e13f79956d683ff0249d8ca00e46c031d0da"} Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.640181 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6","Type":"ContainerDied","Data":"1527bd07152c4241e773659ffece99cf6e3c5940dd78184bc260f9235526b2d0"} Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.640241 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1527bd07152c4241e773659ffece99cf6e3c5940dd78184bc260f9235526b2d0" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.640344 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.650361 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s6hn4\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.661402 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" event={"ID":"b81b63fd-c7d6-4446-ab93-c62912586002","Type":"ContainerStarted","Data":"3a7bc588686a840a99daeb8b5533a1a7d825abf831a7825f5959aa8e54acc792"} Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.784706 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:44:30 crc kubenswrapper[5118]: I1208 17:44:30.793363 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:31 crc kubenswrapper[5118]: I1208 17:44:31.355727 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:31 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:31 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:31 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:31 crc kubenswrapper[5118]: I1208 17:44:31.355794 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:31 crc kubenswrapper[5118]: I1208 17:44:31.436136 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.352402 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:32 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:32 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:32 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.352659 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.925634 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.944853 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46f67036-aba9-49da-a298-d68e56b91e00-kubelet-dir\") pod \"46f67036-aba9-49da-a298-d68e56b91e00\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.944997 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46f67036-aba9-49da-a298-d68e56b91e00-kube-api-access\") pod \"46f67036-aba9-49da-a298-d68e56b91e00\" (UID: \"46f67036-aba9-49da-a298-d68e56b91e00\") " Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.944968 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f67036-aba9-49da-a298-d68e56b91e00-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "46f67036-aba9-49da-a298-d68e56b91e00" (UID: "46f67036-aba9-49da-a298-d68e56b91e00"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.945520 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46f67036-aba9-49da-a298-d68e56b91e00-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:32 crc kubenswrapper[5118]: I1208 17:44:32.969059 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f67036-aba9-49da-a298-d68e56b91e00-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "46f67036-aba9-49da-a298-d68e56b91e00" (UID: "46f67036-aba9-49da-a298-d68e56b91e00"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.048279 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46f67036-aba9-49da-a298-d68e56b91e00-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:33 crc kubenswrapper[5118]: E1208 17:44:33.108634 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:33 crc kubenswrapper[5118]: E1208 17:44:33.111317 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:33 crc kubenswrapper[5118]: E1208 17:44:33.113068 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:33 crc kubenswrapper[5118]: E1208 17:44:33.113112 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.352433 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:33 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:33 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:33 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.352493 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.454280 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.454781 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.454851 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.455035 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.456054 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.456328 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.456475 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.463910 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.466891 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.467626 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.470418 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-8h8fl" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.478667 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.479200 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.479296 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.651467 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.665616 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.680689 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.681075 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"46f67036-aba9-49da-a298-d68e56b91e00","Type":"ContainerDied","Data":"76b530a11ce10f34f5cdd09e5e49ebd752f7193db3ca9a4a4a8a5819dfcecf1d"} Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.681117 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b530a11ce10f34f5cdd09e5e49ebd752f7193db3ca9a4a4a8a5819dfcecf1d" Dec 08 17:44:33 crc kubenswrapper[5118]: I1208 17:44:33.681273 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:34 crc kubenswrapper[5118]: I1208 17:44:34.351593 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:34 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:34 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:34 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:34 crc kubenswrapper[5118]: I1208 17:44:34.351646 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:35 crc kubenswrapper[5118]: I1208 17:44:35.274848 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-x7wvx" Dec 08 17:44:35 crc kubenswrapper[5118]: I1208 17:44:35.352381 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:35 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:35 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:35 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:35 crc kubenswrapper[5118]: I1208 17:44:35.352458 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 08 17:44:35 crc kubenswrapper[5118]: I1208 17:44:35.440647 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c5tbq" Dec 08 17:44:35 crc kubenswrapper[5118]: I1208 17:44:35.530169 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42918: no serving certificate available for the kubelet" Dec 08 17:44:36 crc kubenswrapper[5118]: I1208 17:44:36.352236 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:36 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:36 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:36 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:36 crc kubenswrapper[5118]: I1208 17:44:36.352318 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:37 crc kubenswrapper[5118]: I1208 17:44:37.351386 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rscz2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:37 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Dec 08 17:44:37 crc kubenswrapper[5118]: [+]process-running ok Dec 08 17:44:37 crc kubenswrapper[5118]: healthz check failed Dec 08 17:44:37 crc kubenswrapper[5118]: I1208 17:44:37.351719 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" podUID="fe85cb02-2d21-4fc3-92c1-6d060a006011" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:38 crc kubenswrapper[5118]: I1208 17:44:38.352434 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:38 crc kubenswrapper[5118]: I1208 17:44:38.355894 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-rscz2" Dec 08 17:44:39 crc kubenswrapper[5118]: I1208 17:44:39.176726 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:39 crc kubenswrapper[5118]: I1208 17:44:39.187107 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-dhfvx" Dec 08 17:44:40 crc kubenswrapper[5118]: I1208 17:44:40.527334 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:44:41 crc kubenswrapper[5118]: I1208 17:44:41.345460 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s6hn4"] Dec 08 17:44:41 crc kubenswrapper[5118]: W1208 17:44:41.508269 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-b0fdb2b61c63d63f0329ade1afeea4b4caf60c7102ce0dc1b6283051f89919e7 WatchSource:0}: Error finding container 
b0fdb2b61c63d63f0329ade1afeea4b4caf60c7102ce0dc1b6283051f89919e7: Status 404 returned error can't find the container with id b0fdb2b61c63d63f0329ade1afeea4b4caf60c7102ce0dc1b6283051f89919e7 Dec 08 17:44:41 crc kubenswrapper[5118]: W1208 17:44:41.539629 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-84afb7c8a9e13df098ddcb53b85dae340e7050cd4bd41b251d4fc9612abe78c4 WatchSource:0}: Error finding container 84afb7c8a9e13df098ddcb53b85dae340e7050cd4bd41b251d4fc9612abe78c4: Status 404 returned error can't find the container with id 84afb7c8a9e13df098ddcb53b85dae340e7050cd4bd41b251d4fc9612abe78c4 Dec 08 17:44:41 crc kubenswrapper[5118]: I1208 17:44:41.718421 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"1ba4ac8b3e5261e35866bcc88a4e9fdb4766766105be6ee4c4056b67467c9190"} Dec 08 17:44:41 crc kubenswrapper[5118]: I1208 17:44:41.719550 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" event={"ID":"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc","Type":"ContainerStarted","Data":"ef2774eb27b084c192ab2fbfe7c52e1babc8bccadb79956c3c83e557c0e28270"} Dec 08 17:44:41 crc kubenswrapper[5118]: I1208 17:44:41.720524 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"b0fdb2b61c63d63f0329ade1afeea4b4caf60c7102ce0dc1b6283051f89919e7"} Dec 08 17:44:41 crc kubenswrapper[5118]: I1208 17:44:41.721621 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"84afb7c8a9e13df098ddcb53b85dae340e7050cd4bd41b251d4fc9612abe78c4"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.737316 5118 generic.go:358] "Generic (PLEG): container finished" podID="fe467668-8954-4465-87ca-ef1d5f933d43" containerID="50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610" exitCode=0 Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.737367 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvglb" event={"ID":"fe467668-8954-4465-87ca-ef1d5f933d43","Type":"ContainerDied","Data":"50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.742286 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" event={"ID":"b81b63fd-c7d6-4446-ab93-c62912586002","Type":"ContainerStarted","Data":"0251e7bc1ccaf8fb949b0615e8031b06fcaa67939aa01462f00ecf4211593951"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.744127 5118 generic.go:358] "Generic (PLEG): container finished" podID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerID="ef1557cb2a0d48009c41378fbfc2894fe108a20487cc6caf50911e04ec94ccbc" exitCode=0 Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.744224 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5vp7" 
event={"ID":"8c05f773-74bd-433b-84ce-a7f5430d9b55","Type":"ContainerDied","Data":"ef1557cb2a0d48009c41378fbfc2894fe108a20487cc6caf50911e04ec94ccbc"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.746935 5118 generic.go:358] "Generic (PLEG): container finished" podID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerID="dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c" exitCode=0 Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.747052 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m6rs" event={"ID":"caab7ab2-a04e-42fc-bd64-76c76ee3755d","Type":"ContainerDied","Data":"dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.750406 5118 generic.go:358] "Generic (PLEG): container finished" podID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerID="9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b" exitCode=0 Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.750467 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sb7gg" event={"ID":"e4f4fc3c-88d2-455a-a8d2-209388238c9a","Type":"ContainerDied","Data":"9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.753624 5118 generic.go:358] "Generic (PLEG): container finished" podID="cb8303fe-2019-44f4-a124-af174b28cc02" containerID="f6f2c5311c5f7c1b47e813d9109bbea34736ca2dcab8da2e32723d45e87698f1" exitCode=0 Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.753718 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r22jf" event={"ID":"cb8303fe-2019-44f4-a124-af174b28cc02","Type":"ContainerDied","Data":"f6f2c5311c5f7c1b47e813d9109bbea34736ca2dcab8da2e32723d45e87698f1"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.757446 5118 generic.go:358] "Generic (PLEG): container finished" podID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerID="3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704" exitCode=0 Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.759048 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7jrs" event={"ID":"ba520484-b334-4e08-8f1a-5eb554b62dc4","Type":"ContainerDied","Data":"3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.768286 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" event={"ID":"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc","Type":"ContainerStarted","Data":"c9dc7606a0b78d2fd8ce9155a8194ba05acaed277dfbf4e936fa94958f67ac28"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.768806 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.772518 5118 generic.go:358] "Generic (PLEG): container finished" podID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerID="bfcd02986882237453589d99ce15d916e7b8cb95a5ed00570d6c66ff9c01fd58" exitCode=0 Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.772588 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxwl6" 
event={"ID":"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf","Type":"ContainerDied","Data":"bfcd02986882237453589d99ce15d916e7b8cb95a5ed00570d6c66ff9c01fd58"} Dec 08 17:44:42 crc kubenswrapper[5118]: I1208 17:44:42.882452 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" podStartSLOduration=119.882435653 podStartE2EDuration="1m59.882435653s" podCreationTimestamp="2025-12-08 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:42.879779169 +0000 UTC m=+139.781103273" watchObservedRunningTime="2025-12-08 17:44:42.882435653 +0000 UTC m=+139.783759747" Dec 08 17:44:43 crc kubenswrapper[5118]: E1208 17:44:43.106407 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:43 crc kubenswrapper[5118]: E1208 17:44:43.107815 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:43 crc kubenswrapper[5118]: E1208 17:44:43.108714 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:43 crc kubenswrapper[5118]: E1208 17:44:43.108797 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.784050 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" event={"ID":"b81b63fd-c7d6-4446-ab93-c62912586002","Type":"ContainerStarted","Data":"6baaa0e6f8cb3b757f18d419fd68e217714dea0f859a3b27175dbc1d9e0060d8"} Dec 08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.787242 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5vp7" event={"ID":"8c05f773-74bd-433b-84ce-a7f5430d9b55","Type":"ContainerStarted","Data":"ee3b2515393d01c2713e846910cfc6a4defb5e6d13172cda7e0626c4830978c8"} Dec 08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.791177 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerID="88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f" exitCode=0 Dec 08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.791246 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfv6j" event={"ID":"e2c92d64-3525-4675-bbe9-38bfe6dd4504","Type":"ContainerDied","Data":"88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f"} Dec 
08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.797266 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"7680d0e47740dcf82605de03790387c6429d53cc2c2347e728307828086d7eb5"} Dec 08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.801207 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m6rs" event={"ID":"caab7ab2-a04e-42fc-bd64-76c76ee3755d","Type":"ContainerStarted","Data":"2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a"} Dec 08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.803589 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"228f0a4d57d53ea245c541336c17cd3ce68a38eea76e3f007711d9aae7c6aa27"} Dec 08 17:44:43 crc kubenswrapper[5118]: I1208 17:44:43.807101 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"0261c3b7867a61512b6efcd4c93c3a81b29ed769a3fb1d918ebb7c16038578e9"} Dec 08 17:44:44 crc kubenswrapper[5118]: I1208 17:44:44.235355 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qrls7" podStartSLOduration=29.235332487 podStartE2EDuration="29.235332487s" podCreationTimestamp="2025-12-08 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:44.233942298 +0000 UTC m=+141.135266392" watchObservedRunningTime="2025-12-08 17:44:44.235332487 +0000 UTC m=+141.136656601" Dec 08 17:44:44 crc kubenswrapper[5118]: I1208 17:44:44.405425 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n5vp7" podStartSLOduration=6.22351994 podStartE2EDuration="22.40541046s" podCreationTimestamp="2025-12-08 17:44:22 +0000 UTC" firstStartedPulling="2025-12-08 17:44:25.024698152 +0000 UTC m=+121.926022246" lastFinishedPulling="2025-12-08 17:44:41.206588662 +0000 UTC m=+138.107912766" observedRunningTime="2025-12-08 17:44:44.400204655 +0000 UTC m=+141.301528789" watchObservedRunningTime="2025-12-08 17:44:44.40541046 +0000 UTC m=+141.306734544" Dec 08 17:44:44 crc kubenswrapper[5118]: I1208 17:44:44.828705 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sb7gg" event={"ID":"e4f4fc3c-88d2-455a-a8d2-209388238c9a","Type":"ContainerStarted","Data":"435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a"} Dec 08 17:44:44 crc kubenswrapper[5118]: I1208 17:44:44.832079 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r22jf" event={"ID":"cb8303fe-2019-44f4-a124-af174b28cc02","Type":"ContainerStarted","Data":"19904e29b569326116dde0334fcdcfeccc18c7658bc563e099efdbf4a5c0fe55"} Dec 08 17:44:44 crc kubenswrapper[5118]: I1208 17:44:44.836764 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvglb" event={"ID":"fe467668-8954-4465-87ca-ef1d5f933d43","Type":"ContainerStarted","Data":"784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e"} Dec 08 17:44:45 crc 
kubenswrapper[5118]: I1208 17:44:45.434591 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.456617 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvglb" podStartSLOduration=6.498187335 podStartE2EDuration="21.456599089s" podCreationTimestamp="2025-12-08 17:44:24 +0000 UTC" firstStartedPulling="2025-12-08 17:44:26.316561211 +0000 UTC m=+123.217885305" lastFinishedPulling="2025-12-08 17:44:41.274972925 +0000 UTC m=+138.176297059" observedRunningTime="2025-12-08 17:44:45.453867463 +0000 UTC m=+142.355191557" watchObservedRunningTime="2025-12-08 17:44:45.456599089 +0000 UTC m=+142.357923183" Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.472347 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6m6rs" podStartSLOduration=6.628810574 podStartE2EDuration="21.472328987s" podCreationTimestamp="2025-12-08 17:44:24 +0000 UTC" firstStartedPulling="2025-12-08 17:44:26.436944555 +0000 UTC m=+123.338268649" lastFinishedPulling="2025-12-08 17:44:41.280462978 +0000 UTC m=+138.181787062" observedRunningTime="2025-12-08 17:44:45.469514389 +0000 UTC m=+142.370838493" watchObservedRunningTime="2025-12-08 17:44:45.472328987 +0000 UTC m=+142.373653081" Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.787199 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50926: no serving certificate available for the kubelet" Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.844459 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7jrs" event={"ID":"ba520484-b334-4e08-8f1a-5eb554b62dc4","Type":"ContainerStarted","Data":"baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c"} Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.846193 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxwl6" event={"ID":"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf","Type":"ContainerStarted","Data":"a178a9a0cec87f28ab4326f087a97ff391fb09d7bf9e2b5a008129f49bf869d3"} Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.862983 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sb7gg" podStartSLOduration=7.673647319 podStartE2EDuration="23.862963456s" podCreationTimestamp="2025-12-08 17:44:22 +0000 UTC" firstStartedPulling="2025-12-08 17:44:25.098923927 +0000 UTC m=+122.000248021" lastFinishedPulling="2025-12-08 17:44:41.288240064 +0000 UTC m=+138.189564158" observedRunningTime="2025-12-08 17:44:45.487403526 +0000 UTC m=+142.388727620" watchObservedRunningTime="2025-12-08 17:44:45.862963456 +0000 UTC m=+142.764287540" Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.863351 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w7jrs" podStartSLOduration=7.1369524030000004 podStartE2EDuration="20.863341656s" podCreationTimestamp="2025-12-08 17:44:25 +0000 UTC" firstStartedPulling="2025-12-08 17:44:27.556234675 +0000 UTC m=+124.457558769" lastFinishedPulling="2025-12-08 17:44:41.282623928 +0000 UTC m=+138.183948022" observedRunningTime="2025-12-08 17:44:45.86237089 +0000 UTC m=+142.763694984" watchObservedRunningTime="2025-12-08 17:44:45.863341656 +0000 UTC m=+142.764665750" Dec 08 17:44:45 
crc kubenswrapper[5118]: I1208 17:44:45.900540 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:45 crc kubenswrapper[5118]: I1208 17:44:45.900589 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:46 crc kubenswrapper[5118]: I1208 17:44:46.857947 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfv6j" event={"ID":"e2c92d64-3525-4675-bbe9-38bfe6dd4504","Type":"ContainerStarted","Data":"ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408"} Dec 08 17:44:46 crc kubenswrapper[5118]: I1208 17:44:46.876298 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zfv6j" podStartSLOduration=6.953908348 podStartE2EDuration="21.876283202s" podCreationTimestamp="2025-12-08 17:44:25 +0000 UTC" firstStartedPulling="2025-12-08 17:44:26.394343252 +0000 UTC m=+123.295667346" lastFinishedPulling="2025-12-08 17:44:41.316718106 +0000 UTC m=+138.218042200" observedRunningTime="2025-12-08 17:44:46.875000265 +0000 UTC m=+143.776324359" watchObservedRunningTime="2025-12-08 17:44:46.876283202 +0000 UTC m=+143.777607296" Dec 08 17:44:46 crc kubenswrapper[5118]: I1208 17:44:46.878287 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r22jf" podStartSLOduration=9.756873779 podStartE2EDuration="25.878276636s" podCreationTimestamp="2025-12-08 17:44:21 +0000 UTC" firstStartedPulling="2025-12-08 17:44:25.153127326 +0000 UTC m=+122.054451420" lastFinishedPulling="2025-12-08 17:44:41.274530183 +0000 UTC m=+138.175854277" observedRunningTime="2025-12-08 17:44:45.884463624 +0000 UTC m=+142.785787718" watchObservedRunningTime="2025-12-08 17:44:46.878276636 +0000 UTC m=+143.779600730" Dec 08 17:44:46 crc kubenswrapper[5118]: I1208 17:44:46.894834 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lxwl6" podStartSLOduration=8.836785624000001 podStartE2EDuration="24.894815857s" podCreationTimestamp="2025-12-08 17:44:22 +0000 UTC" firstStartedPulling="2025-12-08 17:44:25.225071748 +0000 UTC m=+122.126395842" lastFinishedPulling="2025-12-08 17:44:41.283101981 +0000 UTC m=+138.184426075" observedRunningTime="2025-12-08 17:44:46.892948375 +0000 UTC m=+143.794272499" watchObservedRunningTime="2025-12-08 17:44:46.894815857 +0000 UTC m=+143.796139961" Dec 08 17:44:46 crc kubenswrapper[5118]: I1208 17:44:46.966238 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w7jrs" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="registry-server" probeResult="failure" output=< Dec 08 17:44:46 crc kubenswrapper[5118]: timeout: failed to connect service ":50051" within 1s Dec 08 17:44:46 crc kubenswrapper[5118]: > Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.315349 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.315698 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.356174 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.418695 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.418754 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.470159 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.649239 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.649298 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.708992 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.864168 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.864266 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.929133 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.943745 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.954043 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:44:52 crc kubenswrapper[5118]: I1208 17:44:52.964757 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:44:53 crc kubenswrapper[5118]: I1208 17:44:53.008144 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:53 crc kubenswrapper[5118]: E1208 17:44:53.106226 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:53 crc kubenswrapper[5118]: E1208 17:44:53.108519 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:53 crc kubenswrapper[5118]: E1208 17:44:53.109915 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:53 crc kubenswrapper[5118]: E1208 17:44:53.109956 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:44:54 crc kubenswrapper[5118]: I1208 17:44:54.362814 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:54 crc kubenswrapper[5118]: I1208 17:44:54.363015 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:54 crc kubenswrapper[5118]: I1208 17:44:54.428303 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:54 crc kubenswrapper[5118]: I1208 17:44:54.779213 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n5vp7"] Dec 08 17:44:54 crc kubenswrapper[5118]: I1208 17:44:54.908844 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n5vp7" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerName="registry-server" containerID="cri-o://ee3b2515393d01c2713e846910cfc6a4defb5e6d13172cda7e0626c4830978c8" gracePeriod=2 Dec 08 17:44:54 crc kubenswrapper[5118]: I1208 17:44:54.979469 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.059514 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.060036 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.108567 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.375578 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sb7gg"] Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.376147 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sb7gg" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="registry-server" containerID="cri-o://435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a" gracePeriod=2 Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.493289 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.493376 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.554050 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.993294 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:55 crc kubenswrapper[5118]: I1208 17:44:55.993406 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:44:56 crc kubenswrapper[5118]: I1208 17:44:56.009594 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:56 crc kubenswrapper[5118]: I1208 17:44:56.127707 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:44:57 crc kubenswrapper[5118]: I1208 17:44:57.174923 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m6rs"] Dec 08 17:44:57 crc kubenswrapper[5118]: I1208 17:44:57.479171 5118 patch_prober.go:28] interesting pod/package-server-manager-77f986bd66-d8qsj container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:44:57 crc kubenswrapper[5118]: I1208 17:44:57.479621 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" podUID="9148080a-77e2-4847-840a-d67f837c8fbe" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 17:44:57 crc kubenswrapper[5118]: I1208 17:44:57.924732 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6m6rs" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="registry-server" containerID="cri-o://2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a" gracePeriod=2 Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.400360 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bdhnb_8ffbd09a-f2fa-4983-84c6-c6db6ccaac49/kube-multus-additional-cni-plugins/0.log" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.400704 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.537769 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-ready\") pod \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.537854 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-tuning-conf-dir\") pod \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.537911 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmh9\" (UniqueName: \"kubernetes.io/projected/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-kube-api-access-7kmh9\") pod \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.537978 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-cni-sysctl-allowlist\") pod \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\" (UID: \"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49\") " Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.538671 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" (UID: "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.538990 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-ready" (OuterVolumeSpecName: "ready") pod "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" (UID: "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.539166 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" (UID: "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.547591 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-kube-api-access-7kmh9" (OuterVolumeSpecName: "kube-api-access-7kmh9") pod "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" (UID: "8ffbd09a-f2fa-4983-84c6-c6db6ccaac49"). InnerVolumeSpecName "kube-api-access-7kmh9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.639677 5118 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-ready\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.639734 5118 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.639745 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7kmh9\" (UniqueName: \"kubernetes.io/projected/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-kube-api-access-7kmh9\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.639753 5118 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.862197 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sb7gg_e4f4fc3c-88d2-455a-a8d2-209388238c9a/registry-server/0.log" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.864811 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.922737 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.930321 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bdhnb_8ffbd09a-f2fa-4983-84c6-c6db6ccaac49/kube-multus-additional-cni-plugins/0.log" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.930410 5118 generic.go:358] "Generic (PLEG): container finished" podID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" exitCode=137 Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.930477 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.930528 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" event={"ID":"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49","Type":"ContainerDied","Data":"bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0"} Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.930562 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bdhnb" event={"ID":"8ffbd09a-f2fa-4983-84c6-c6db6ccaac49","Type":"ContainerDied","Data":"1171a1787b5d4f4328c171567ca11f6fbfef3b0d18352bc9c205067f31f864e3"} Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.930584 5118 scope.go:117] "RemoveContainer" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.933391 5118 generic.go:358] "Generic (PLEG): container finished" podID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerID="ee3b2515393d01c2713e846910cfc6a4defb5e6d13172cda7e0626c4830978c8" exitCode=0 Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.933475 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5vp7" event={"ID":"8c05f773-74bd-433b-84ce-a7f5430d9b55","Type":"ContainerDied","Data":"ee3b2515393d01c2713e846910cfc6a4defb5e6d13172cda7e0626c4830978c8"} Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.937726 5118 generic.go:358] "Generic (PLEG): container finished" podID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerID="2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a" exitCode=0 Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.937862 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m6rs" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.937852 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m6rs" event={"ID":"caab7ab2-a04e-42fc-bd64-76c76ee3755d","Type":"ContainerDied","Data":"2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a"} Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.937930 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m6rs" event={"ID":"caab7ab2-a04e-42fc-bd64-76c76ee3755d","Type":"ContainerDied","Data":"bebe1f0da9278f62d8caef7874fc35428010d09942af1719c37fac3e6c4e8b5b"} Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.940450 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sb7gg_e4f4fc3c-88d2-455a-a8d2-209388238c9a/registry-server/0.log" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.942844 5118 generic.go:358] "Generic (PLEG): container finished" podID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerID="435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a" exitCode=137 Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.942971 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sb7gg" event={"ID":"e4f4fc3c-88d2-455a-a8d2-209388238c9a","Type":"ContainerDied","Data":"435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a"} Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.942997 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sb7gg" event={"ID":"e4f4fc3c-88d2-455a-a8d2-209388238c9a","Type":"ContainerDied","Data":"ef0290741bfadc050351726657ea5d1e90d89bb42d86f01f2b2081dd32e004ea"} Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.943079 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sb7gg" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.960584 5118 scope.go:117] "RemoveContainer" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.962614 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bdhnb"] Dec 08 17:44:58 crc kubenswrapper[5118]: E1208 17:44:58.963110 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0\": container with ID starting with bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0 not found: ID does not exist" containerID="bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.963167 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0"} err="failed to get container status \"bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0\": rpc error: code = NotFound desc = could not find container \"bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0\": container with ID starting with bf856c4ce39ea22a47c20ce3a893bb3e59b76de3a4a2519b97294ead8f9391d0 not found: ID does not exist" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.963212 5118 scope.go:117] "RemoveContainer" containerID="2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a" Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.965294 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bdhnb"] Dec 08 17:44:58 crc kubenswrapper[5118]: I1208 17:44:58.981226 5118 scope.go:117] "RemoveContainer" containerID="dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.003047 5118 scope.go:117] "RemoveContainer" containerID="1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.021837 5118 scope.go:117] "RemoveContainer" containerID="2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a" Dec 08 17:44:59 crc kubenswrapper[5118]: E1208 17:44:59.022317 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a\": container with ID starting with 2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a not found: ID does not exist" containerID="2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.022348 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a"} err="failed to get container status \"2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a\": rpc error: code = NotFound desc = could not find container \"2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a\": container with ID starting with 2f343988c85e306c38290b8bfe45b3a6e038b805f8210fc5565d48238b1cd45a not found: ID does not exist" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.022370 5118 scope.go:117] "RemoveContainer" 
containerID="dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c" Dec 08 17:44:59 crc kubenswrapper[5118]: E1208 17:44:59.022768 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c\": container with ID starting with dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c not found: ID does not exist" containerID="dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.022817 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c"} err="failed to get container status \"dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c\": rpc error: code = NotFound desc = could not find container \"dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c\": container with ID starting with dcceb3a7d3ba1066136e08a7add058961a3fb9c31b40f05bdd2ce3a8b5cb777c not found: ID does not exist" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.022849 5118 scope.go:117] "RemoveContainer" containerID="1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3" Dec 08 17:44:59 crc kubenswrapper[5118]: E1208 17:44:59.023135 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3\": container with ID starting with 1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3 not found: ID does not exist" containerID="1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.023165 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3"} err="failed to get container status \"1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3\": rpc error: code = NotFound desc = could not find container \"1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3\": container with ID starting with 1c66b13047e75f1bad78004632e6bc7e6cb083e8b4f6fd5fe2c2e871b4d6d8f3 not found: ID does not exist" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.023180 5118 scope.go:117] "RemoveContainer" containerID="435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.037130 5118 scope.go:117] "RemoveContainer" containerID="9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.043088 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj9vb\" (UniqueName: \"kubernetes.io/projected/caab7ab2-a04e-42fc-bd64-76c76ee3755d-kube-api-access-mj9vb\") pod \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.043148 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb2h7\" (UniqueName: \"kubernetes.io/projected/e4f4fc3c-88d2-455a-a8d2-209388238c9a-kube-api-access-hb2h7\") pod \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.043217 5118 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-utilities\") pod \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.043303 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-catalog-content\") pod \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.043350 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-catalog-content\") pod \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\" (UID: \"e4f4fc3c-88d2-455a-a8d2-209388238c9a\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.043446 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-utilities\") pod \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\" (UID: \"caab7ab2-a04e-42fc-bd64-76c76ee3755d\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.044546 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-utilities" (OuterVolumeSpecName: "utilities") pod "caab7ab2-a04e-42fc-bd64-76c76ee3755d" (UID: "caab7ab2-a04e-42fc-bd64-76c76ee3755d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.044979 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-utilities" (OuterVolumeSpecName: "utilities") pod "e4f4fc3c-88d2-455a-a8d2-209388238c9a" (UID: "e4f4fc3c-88d2-455a-a8d2-209388238c9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.047682 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f4fc3c-88d2-455a-a8d2-209388238c9a-kube-api-access-hb2h7" (OuterVolumeSpecName: "kube-api-access-hb2h7") pod "e4f4fc3c-88d2-455a-a8d2-209388238c9a" (UID: "e4f4fc3c-88d2-455a-a8d2-209388238c9a"). InnerVolumeSpecName "kube-api-access-hb2h7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.047726 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caab7ab2-a04e-42fc-bd64-76c76ee3755d-kube-api-access-mj9vb" (OuterVolumeSpecName: "kube-api-access-mj9vb") pod "caab7ab2-a04e-42fc-bd64-76c76ee3755d" (UID: "caab7ab2-a04e-42fc-bd64-76c76ee3755d"). InnerVolumeSpecName "kube-api-access-mj9vb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.055133 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "caab7ab2-a04e-42fc-bd64-76c76ee3755d" (UID: "caab7ab2-a04e-42fc-bd64-76c76ee3755d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.056354 5118 scope.go:117] "RemoveContainer" containerID="4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.093300 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4f4fc3c-88d2-455a-a8d2-209388238c9a" (UID: "e4f4fc3c-88d2-455a-a8d2-209388238c9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.112119 5118 scope.go:117] "RemoveContainer" containerID="435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a" Dec 08 17:44:59 crc kubenswrapper[5118]: E1208 17:44:59.116835 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a\": container with ID starting with 435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a not found: ID does not exist" containerID="435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.116915 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a"} err="failed to get container status \"435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a\": rpc error: code = NotFound desc = could not find container \"435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a\": container with ID starting with 435ddacff3a3c30df18535612a9b123318c3ba4753ce5d25a84fed17ec586d0a not found: ID does not exist" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.116953 5118 scope.go:117] "RemoveContainer" containerID="9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b" Dec 08 17:44:59 crc kubenswrapper[5118]: E1208 17:44:59.117319 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b\": container with ID starting with 9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b not found: ID does not exist" containerID="9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.117342 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b"} err="failed to get container status \"9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b\": rpc error: code = NotFound desc = could not find container \"9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b\": container with ID starting with 9a2ab17843bd36c31f1adbe8492fc684a0aac918d650d0ea4e1be40b9884d37b not found: ID does not exist" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.117358 5118 scope.go:117] "RemoveContainer" containerID="4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2" Dec 08 17:44:59 crc kubenswrapper[5118]: E1208 17:44:59.117592 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2\": container with ID starting with 4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2 not found: ID does not exist" containerID="4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.117617 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2"} err="failed to get container status \"4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2\": rpc error: code = NotFound desc = could not find container \"4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2\": container with ID starting with 4653db880fc6a5b47cdaa0ba5f369ef1c8ac02b34151805d4aed8008b3fa79d2 not found: ID does not exist" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.123354 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.144291 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mj9vb\" (UniqueName: \"kubernetes.io/projected/caab7ab2-a04e-42fc-bd64-76c76ee3755d-kube-api-access-mj9vb\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.144323 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hb2h7\" (UniqueName: \"kubernetes.io/projected/e4f4fc3c-88d2-455a-a8d2-209388238c9a-kube-api-access-hb2h7\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.144333 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.144345 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.144354 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f4fc3c-88d2-455a-a8d2-209388238c9a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.144364 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caab7ab2-a04e-42fc-bd64-76c76ee3755d-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.245142 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-catalog-content\") pod \"8c05f773-74bd-433b-84ce-a7f5430d9b55\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.245325 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-utilities\") pod \"8c05f773-74bd-433b-84ce-a7f5430d9b55\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.245444 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zj2mc\" (UniqueName: \"kubernetes.io/projected/8c05f773-74bd-433b-84ce-a7f5430d9b55-kube-api-access-zj2mc\") pod \"8c05f773-74bd-433b-84ce-a7f5430d9b55\" (UID: \"8c05f773-74bd-433b-84ce-a7f5430d9b55\") " Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.246414 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-utilities" (OuterVolumeSpecName: "utilities") pod "8c05f773-74bd-433b-84ce-a7f5430d9b55" (UID: "8c05f773-74bd-433b-84ce-a7f5430d9b55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.261821 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c05f773-74bd-433b-84ce-a7f5430d9b55-kube-api-access-zj2mc" (OuterVolumeSpecName: "kube-api-access-zj2mc") pod "8c05f773-74bd-433b-84ce-a7f5430d9b55" (UID: "8c05f773-74bd-433b-84ce-a7f5430d9b55"). InnerVolumeSpecName "kube-api-access-zj2mc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.284745 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sb7gg"] Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.287230 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c05f773-74bd-433b-84ce-a7f5430d9b55" (UID: "8c05f773-74bd-433b-84ce-a7f5430d9b55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.291264 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sb7gg"] Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.297569 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m6rs"] Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.300054 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m6rs"] Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.346607 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.346893 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c05f773-74bd-433b-84ce-a7f5430d9b55-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.346967 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zj2mc\" (UniqueName: \"kubernetes.io/projected/8c05f773-74bd-433b-84ce-a7f5430d9b55-kube-api-access-zj2mc\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.433320 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" path="/var/lib/kubelet/pods/8ffbd09a-f2fa-4983-84c6-c6db6ccaac49/volumes" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.433776 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" 
path="/var/lib/kubelet/pods/caab7ab2-a04e-42fc-bd64-76c76ee3755d/volumes" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.434408 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" path="/var/lib/kubelet/pods/e4f4fc3c-88d2-455a-a8d2-209388238c9a/volumes" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.950193 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5vp7" event={"ID":"8c05f773-74bd-433b-84ce-a7f5430d9b55","Type":"ContainerDied","Data":"ae661eb10cf40ee037c5bbb75003eb4cc6748efa6c166b48128d05415dc58f58"} Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.950241 5118 scope.go:117] "RemoveContainer" containerID="ee3b2515393d01c2713e846910cfc6a4defb5e6d13172cda7e0626c4830978c8" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.950371 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5vp7" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.968347 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n5vp7"] Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.969713 5118 scope.go:117] "RemoveContainer" containerID="ef1557cb2a0d48009c41378fbfc2894fe108a20487cc6caf50911e04ec94ccbc" Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.972822 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n5vp7"] Dec 08 17:44:59 crc kubenswrapper[5118]: I1208 17:44:59.984202 5118 scope.go:117] "RemoveContainer" containerID="9f8a267f29d857e5a98610714fb9c1148b18b501a31c792893697428ba40718c" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.129010 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc"] Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.129793 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.129906 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.129969 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerName="extract-utilities" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130057 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerName="extract-utilities" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130117 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="extract-content" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130171 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="extract-content" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130227 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130286 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" 
containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130338 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="extract-utilities" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130390 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="extract-utilities" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130441 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46f67036-aba9-49da-a298-d68e56b91e00" containerName="pruner" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130492 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f67036-aba9-49da-a298-d68e56b91e00" containerName="pruner" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130546 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="extract-utilities" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130594 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="extract-utilities" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130651 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerName="extract-content" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.130705 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerName="extract-content" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131023 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6" containerName="pruner" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131270 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6" containerName="pruner" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131387 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="extract-content" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131465 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="extract-content" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131547 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131620 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131694 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerName="kube-multus-additional-cni-plugins" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131746 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerName="kube-multus-additional-cni-plugins" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.131944 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8ffbd09a-f2fa-4983-84c6-c6db6ccaac49" containerName="kube-multus-additional-cni-plugins" 
Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.132028 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4f4fc3c-88d2-455a-a8d2-209388238c9a" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.132088 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="caab7ab2-a04e-42fc-bd64-76c76ee3755d" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.132143 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" containerName="registry-server" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.132199 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="46f67036-aba9-49da-a298-d68e56b91e00" containerName="pruner" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.132255 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c683e0b8-bb8e-4012-80e0-a07cbd5b9cf6" containerName="pruner" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.149617 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc"] Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.150160 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.152375 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.152749 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.172906 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w7jrs"] Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.173390 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w7jrs" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="registry-server" containerID="cri-o://baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c" gracePeriod=2 Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.257526 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjh6d\" (UniqueName: \"kubernetes.io/projected/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-kube-api-access-bjh6d\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.257905 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-config-volume\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.257959 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-secret-volume\") pod 
\"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.358689 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-config-volume\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.358948 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-secret-volume\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.359062 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bjh6d\" (UniqueName: \"kubernetes.io/projected/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-kube-api-access-bjh6d\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.359465 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-config-volume\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.364583 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-secret-volume\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.378028 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjh6d\" (UniqueName: \"kubernetes.io/projected/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-kube-api-access-bjh6d\") pod \"collect-profiles-29420265-vsxwc\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.474213 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.631176 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.764033 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-utilities\") pod \"ba520484-b334-4e08-8f1a-5eb554b62dc4\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.764137 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-catalog-content\") pod \"ba520484-b334-4e08-8f1a-5eb554b62dc4\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.764164 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb86b\" (UniqueName: \"kubernetes.io/projected/ba520484-b334-4e08-8f1a-5eb554b62dc4-kube-api-access-hb86b\") pod \"ba520484-b334-4e08-8f1a-5eb554b62dc4\" (UID: \"ba520484-b334-4e08-8f1a-5eb554b62dc4\") " Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.765230 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-utilities" (OuterVolumeSpecName: "utilities") pod "ba520484-b334-4e08-8f1a-5eb554b62dc4" (UID: "ba520484-b334-4e08-8f1a-5eb554b62dc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.768147 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba520484-b334-4e08-8f1a-5eb554b62dc4-kube-api-access-hb86b" (OuterVolumeSpecName: "kube-api-access-hb86b") pod "ba520484-b334-4e08-8f1a-5eb554b62dc4" (UID: "ba520484-b334-4e08-8f1a-5eb554b62dc4"). InnerVolumeSpecName "kube-api-access-hb86b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.846592 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc"] Dec 08 17:45:00 crc kubenswrapper[5118]: W1208 17:45:00.854395 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ec0e45e_87cc_4b67_b137_ac7179bf7d74.slice/crio-7810789719cfd0fd8d7090c5dccc5d21f47fb6dcf6d7d8296f3c4eea157842d1 WatchSource:0}: Error finding container 7810789719cfd0fd8d7090c5dccc5d21f47fb6dcf6d7d8296f3c4eea157842d1: Status 404 returned error can't find the container with id 7810789719cfd0fd8d7090c5dccc5d21f47fb6dcf6d7d8296f3c4eea157842d1 Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.859819 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba520484-b334-4e08-8f1a-5eb554b62dc4" (UID: "ba520484-b334-4e08-8f1a-5eb554b62dc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.864976 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.864996 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hb86b\" (UniqueName: \"kubernetes.io/projected/ba520484-b334-4e08-8f1a-5eb554b62dc4-kube-api-access-hb86b\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.865007 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba520484-b334-4e08-8f1a-5eb554b62dc4-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.962521 5118 generic.go:358] "Generic (PLEG): container finished" podID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerID="baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c" exitCode=0 Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.962649 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7jrs" event={"ID":"ba520484-b334-4e08-8f1a-5eb554b62dc4","Type":"ContainerDied","Data":"baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c"} Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.962670 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w7jrs" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.962706 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7jrs" event={"ID":"ba520484-b334-4e08-8f1a-5eb554b62dc4","Type":"ContainerDied","Data":"79fd674b2f1982666d841b20537687d86fe6bb801a03c4ed53a6f95d3bc986ac"} Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.962728 5118 scope.go:117] "RemoveContainer" containerID="baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.966189 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" event={"ID":"3ec0e45e-87cc-4b67-b137-ac7179bf7d74","Type":"ContainerStarted","Data":"a8f4220aed50eabf72f5764f10a3e579a1b73068563043c73e69f2bcc656a2b8"} Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.966221 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" event={"ID":"3ec0e45e-87cc-4b67-b137-ac7179bf7d74","Type":"ContainerStarted","Data":"7810789719cfd0fd8d7090c5dccc5d21f47fb6dcf6d7d8296f3c4eea157842d1"} Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.982497 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" podStartSLOduration=0.982471135 podStartE2EDuration="982.471135ms" podCreationTimestamp="2025-12-08 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:00.982177087 +0000 UTC m=+157.883501181" watchObservedRunningTime="2025-12-08 17:45:00.982471135 +0000 UTC m=+157.883795239" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.993450 5118 scope.go:117] "RemoveContainer" 
containerID="3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704" Dec 08 17:45:00 crc kubenswrapper[5118]: I1208 17:45:00.996681 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w7jrs"] Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.001134 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w7jrs"] Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.014300 5118 scope.go:117] "RemoveContainer" containerID="fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.042565 5118 scope.go:117] "RemoveContainer" containerID="baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c" Dec 08 17:45:01 crc kubenswrapper[5118]: E1208 17:45:01.043193 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c\": container with ID starting with baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c not found: ID does not exist" containerID="baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.043344 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c"} err="failed to get container status \"baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c\": rpc error: code = NotFound desc = could not find container \"baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c\": container with ID starting with baabe0f04287772b04bb472e8d2d02239ab7bd90c654fc6e0ed56593df06f34c not found: ID does not exist" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.043466 5118 scope.go:117] "RemoveContainer" containerID="3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704" Dec 08 17:45:01 crc kubenswrapper[5118]: E1208 17:45:01.044423 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704\": container with ID starting with 3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704 not found: ID does not exist" containerID="3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.044462 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704"} err="failed to get container status \"3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704\": rpc error: code = NotFound desc = could not find container \"3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704\": container with ID starting with 3fa7a04a596e2eb73ff66ab98810782e6c228fd6e4d3c94762442646a4e1f704 not found: ID does not exist" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.044488 5118 scope.go:117] "RemoveContainer" containerID="fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e" Dec 08 17:45:01 crc kubenswrapper[5118]: E1208 17:45:01.044803 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e\": container with ID starting with 
fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e not found: ID does not exist" containerID="fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.044835 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e"} err="failed to get container status \"fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e\": rpc error: code = NotFound desc = could not find container \"fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e\": container with ID starting with fa91142ce831173794c13aefc78337545548d8dcf1cc288b83917ac7236eb69e not found: ID does not exist" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.435426 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c05f773-74bd-433b-84ce-a7f5430d9b55" path="/var/lib/kubelet/pods/8c05f773-74bd-433b-84ce-a7f5430d9b55/volumes" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.436980 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" path="/var/lib/kubelet/pods/ba520484-b334-4e08-8f1a-5eb554b62dc4/volumes" Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.973436 5118 generic.go:358] "Generic (PLEG): container finished" podID="3ec0e45e-87cc-4b67-b137-ac7179bf7d74" containerID="a8f4220aed50eabf72f5764f10a3e579a1b73068563043c73e69f2bcc656a2b8" exitCode=0 Dec 08 17:45:01 crc kubenswrapper[5118]: I1208 17:45:01.974642 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" event={"ID":"3ec0e45e-87cc-4b67-b137-ac7179bf7d74","Type":"ContainerDied","Data":"a8f4220aed50eabf72f5764f10a3e579a1b73068563043c73e69f2bcc656a2b8"} Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.311118 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.498574 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-secret-volume\") pod \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.498713 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-config-volume\") pod \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.498994 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjh6d\" (UniqueName: \"kubernetes.io/projected/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-kube-api-access-bjh6d\") pod \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\" (UID: \"3ec0e45e-87cc-4b67-b137-ac7179bf7d74\") " Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.499451 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ec0e45e-87cc-4b67-b137-ac7179bf7d74" (UID: "3ec0e45e-87cc-4b67-b137-ac7179bf7d74"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.499727 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.504584 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-kube-api-access-bjh6d" (OuterVolumeSpecName: "kube-api-access-bjh6d") pod "3ec0e45e-87cc-4b67-b137-ac7179bf7d74" (UID: "3ec0e45e-87cc-4b67-b137-ac7179bf7d74"). InnerVolumeSpecName "kube-api-access-bjh6d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.506058 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3ec0e45e-87cc-4b67-b137-ac7179bf7d74" (UID: "3ec0e45e-87cc-4b67-b137-ac7179bf7d74"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.600669 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjh6d\" (UniqueName: \"kubernetes.io/projected/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-kube-api-access-bjh6d\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.601103 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ec0e45e-87cc-4b67-b137-ac7179bf7d74-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.798455 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799150 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="extract-utilities" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799165 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="extract-utilities" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799178 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="extract-content" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799187 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="extract-content" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799203 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ec0e45e-87cc-4b67-b137-ac7179bf7d74" containerName="collect-profiles" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799212 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec0e45e-87cc-4b67-b137-ac7179bf7d74" containerName="collect-profiles" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799251 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="registry-server" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799258 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" 
containerName="registry-server" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799362 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ec0e45e-87cc-4b67-b137-ac7179bf7d74" containerName="collect-profiles" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.799378 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="ba520484-b334-4e08-8f1a-5eb554b62dc4" containerName="registry-server" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.802948 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.805411 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.805610 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.810156 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.905183 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1087bc4c-df19-4954-92b2-e9bfc266fdab-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.905379 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1087bc4c-df19-4954-92b2-e9bfc266fdab-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.987449 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.987451 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-vsxwc" event={"ID":"3ec0e45e-87cc-4b67-b137-ac7179bf7d74","Type":"ContainerDied","Data":"7810789719cfd0fd8d7090c5dccc5d21f47fb6dcf6d7d8296f3c4eea157842d1"} Dec 08 17:45:03 crc kubenswrapper[5118]: I1208 17:45:03.987570 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7810789719cfd0fd8d7090c5dccc5d21f47fb6dcf6d7d8296f3c4eea157842d1" Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.006389 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1087bc4c-df19-4954-92b2-e9bfc266fdab-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.006458 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1087bc4c-df19-4954-92b2-e9bfc266fdab-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.006695 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1087bc4c-df19-4954-92b2-e9bfc266fdab-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.022827 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1087bc4c-df19-4954-92b2-e9bfc266fdab-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.125286 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.538832 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:45:04 crc kubenswrapper[5118]: W1208 17:45:04.551070 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1087bc4c_df19_4954_92b2_e9bfc266fdab.slice/crio-1e64deb33e2db93be18e177b42b0973ac5c5d3af629b652c416cc89d95dd00d3 WatchSource:0}: Error finding container 1e64deb33e2db93be18e177b42b0973ac5c5d3af629b652c416cc89d95dd00d3: Status 404 returned error can't find the container with id 1e64deb33e2db93be18e177b42b0973ac5c5d3af629b652c416cc89d95dd00d3 Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.997295 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"1087bc4c-df19-4954-92b2-e9bfc266fdab","Type":"ContainerStarted","Data":"bb803c1dff52085d1804ee139c86a9e998c519f17eca048a79448b657d0a5061"} Dec 08 17:45:04 crc kubenswrapper[5118]: I1208 17:45:04.997860 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"1087bc4c-df19-4954-92b2-e9bfc266fdab","Type":"ContainerStarted","Data":"1e64deb33e2db93be18e177b42b0973ac5c5d3af629b652c416cc89d95dd00d3"} Dec 08 17:45:05 crc kubenswrapper[5118]: I1208 17:45:05.013468 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.013446297 podStartE2EDuration="2.013446297s" podCreationTimestamp="2025-12-08 17:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:05.009965061 +0000 UTC m=+161.911289165" watchObservedRunningTime="2025-12-08 17:45:05.013446297 +0000 UTC m=+161.914770391" Dec 08 17:45:05 crc kubenswrapper[5118]: I1208 17:45:05.439937 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:45:06 crc kubenswrapper[5118]: I1208 17:45:06.003144 5118 generic.go:358] "Generic (PLEG): container finished" podID="1087bc4c-df19-4954-92b2-e9bfc266fdab" containerID="bb803c1dff52085d1804ee139c86a9e998c519f17eca048a79448b657d0a5061" exitCode=0 Dec 08 17:45:06 crc kubenswrapper[5118]: I1208 17:45:06.003644 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"1087bc4c-df19-4954-92b2-e9bfc266fdab","Type":"ContainerDied","Data":"bb803c1dff52085d1804ee139c86a9e998c519f17eca048a79448b657d0a5061"} Dec 08 17:45:06 crc kubenswrapper[5118]: I1208 17:45:06.295170 5118 ???:1] "http: TLS handshake error from 192.168.126.11:59430: no serving certificate available for the kubelet" Dec 08 17:45:06 crc kubenswrapper[5118]: I1208 17:45:06.444176 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-d8qsj" Dec 08 17:45:07 crc kubenswrapper[5118]: I1208 17:45:07.316884 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:07 crc kubenswrapper[5118]: I1208 17:45:07.453865 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1087bc4c-df19-4954-92b2-e9bfc266fdab-kube-api-access\") pod \"1087bc4c-df19-4954-92b2-e9bfc266fdab\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " Dec 08 17:45:07 crc kubenswrapper[5118]: I1208 17:45:07.453934 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1087bc4c-df19-4954-92b2-e9bfc266fdab-kubelet-dir\") pod \"1087bc4c-df19-4954-92b2-e9bfc266fdab\" (UID: \"1087bc4c-df19-4954-92b2-e9bfc266fdab\") " Dec 08 17:45:07 crc kubenswrapper[5118]: I1208 17:45:07.454115 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1087bc4c-df19-4954-92b2-e9bfc266fdab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1087bc4c-df19-4954-92b2-e9bfc266fdab" (UID: "1087bc4c-df19-4954-92b2-e9bfc266fdab"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:07 crc kubenswrapper[5118]: I1208 17:45:07.454323 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1087bc4c-df19-4954-92b2-e9bfc266fdab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:07 crc kubenswrapper[5118]: I1208 17:45:07.467619 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1087bc4c-df19-4954-92b2-e9bfc266fdab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1087bc4c-df19-4954-92b2-e9bfc266fdab" (UID: "1087bc4c-df19-4954-92b2-e9bfc266fdab"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:07 crc kubenswrapper[5118]: I1208 17:45:07.555232 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1087bc4c-df19-4954-92b2-e9bfc266fdab-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:08 crc kubenswrapper[5118]: I1208 17:45:08.018737 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"1087bc4c-df19-4954-92b2-e9bfc266fdab","Type":"ContainerDied","Data":"1e64deb33e2db93be18e177b42b0973ac5c5d3af629b652c416cc89d95dd00d3"} Dec 08 17:45:08 crc kubenswrapper[5118]: I1208 17:45:08.018798 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e64deb33e2db93be18e177b42b0973ac5c5d3af629b652c416cc89d95dd00d3" Dec 08 17:45:08 crc kubenswrapper[5118]: I1208 17:45:08.018795 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:45:10 crc kubenswrapper[5118]: I1208 17:45:10.598614 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:45:10 crc kubenswrapper[5118]: I1208 17:45:10.599805 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1087bc4c-df19-4954-92b2-e9bfc266fdab" containerName="pruner" Dec 08 17:45:10 crc kubenswrapper[5118]: I1208 17:45:10.599825 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1087bc4c-df19-4954-92b2-e9bfc266fdab" containerName="pruner" Dec 08 17:45:10 crc kubenswrapper[5118]: I1208 17:45:10.599998 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="1087bc4c-df19-4954-92b2-e9bfc266fdab" containerName="pruner" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.364326 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.364621 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.368778 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.369121 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.397215 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/158725bd-7556-4281-a3cb-acaa6baf5d8c-kube-api-access\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.397351 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-var-lock\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.397495 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.498424 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/158725bd-7556-4281-a3cb-acaa6baf5d8c-kube-api-access\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.498508 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-var-lock\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: 
I1208 17:45:11.498584 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.498615 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-var-lock\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.498669 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.519368 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/158725bd-7556-4281-a3cb-acaa6baf5d8c-kube-api-access\") pod \"installer-12-crc\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:11 crc kubenswrapper[5118]: I1208 17:45:11.691916 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:12 crc kubenswrapper[5118]: I1208 17:45:12.117527 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:45:13 crc kubenswrapper[5118]: I1208 17:45:13.046056 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"158725bd-7556-4281-a3cb-acaa6baf5d8c","Type":"ContainerStarted","Data":"509952fab5a13bc2060cdb8baf4a28712771b3cbd148240cee05b4f9cff4bcc8"} Dec 08 17:45:13 crc kubenswrapper[5118]: I1208 17:45:13.046464 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"158725bd-7556-4281-a3cb-acaa6baf5d8c","Type":"ContainerStarted","Data":"b8a864a71dfd0d83f1246550bc2ef6c6048487044c12c63e1738f0ca50d342f8"} Dec 08 17:45:13 crc kubenswrapper[5118]: I1208 17:45:13.064972 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=3.0649524 podStartE2EDuration="3.0649524s" podCreationTimestamp="2025-12-08 17:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:13.06421615 +0000 UTC m=+169.965540244" watchObservedRunningTime="2025-12-08 17:45:13.0649524 +0000 UTC m=+169.966276514" Dec 08 17:45:15 crc kubenswrapper[5118]: I1208 17:45:15.850983 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:45:22 crc kubenswrapper[5118]: I1208 17:45:22.093090 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ztdrc"] Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.130461 5118 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" podUID="9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" containerName="oauth-openshift" containerID="cri-o://8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6" gracePeriod=15 Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.298040 5118 ???:1] "http: TLS handshake error from 192.168.126.11:52838: no serving certificate available for the kubelet" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.541989 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.570558 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-57ffdf54dd-5dg99"] Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.571209 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" containerName="oauth-openshift" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.571231 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" containerName="oauth-openshift" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.571327 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" containerName="oauth-openshift" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.574794 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.581313 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57ffdf54dd-5dg99"] Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659296 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-login\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659352 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-error\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659392 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-provider-selection\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659413 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-serving-cert\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659435 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-router-certs\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659495 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-dir\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659516 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-policies\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659552 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-service-ca\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659584 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb8vr\" (UniqueName: \"kubernetes.io/projected/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-kube-api-access-jb8vr\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659635 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-session\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659690 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-ocp-branding-template\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659727 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-idp-0-file-data\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659797 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-trusted-ca-bundle\") pod \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659835 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-cliconfig\") pod 
\"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\" (UID: \"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e\") " Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.659985 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-login\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660024 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660063 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660098 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-session\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660142 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660177 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660201 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c242c34-d446-4428-b8d7-0b8dbf4137c9-audit-dir\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660224 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpp99\" (UniqueName: 
\"kubernetes.io/projected/0c242c34-d446-4428-b8d7-0b8dbf4137c9-kube-api-access-jpp99\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660255 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660312 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-audit-policies\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660382 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660404 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660443 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.660468 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-error\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.661369 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.661382 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.661766 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.662505 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.662628 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.667486 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.671267 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.671409 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.671748 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.671949 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-kube-api-access-jb8vr" (OuterVolumeSpecName: "kube-api-access-jb8vr") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "kube-api-access-jb8vr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.672139 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.672369 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.672582 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.674598 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" (UID: "9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761108 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761165 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761203 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761226 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-error\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761264 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-login\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761289 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761322 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761353 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-session\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: 
\"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761385 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761413 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761436 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c242c34-d446-4428-b8d7-0b8dbf4137c9-audit-dir\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761458 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpp99\" (UniqueName: \"kubernetes.io/projected/0c242c34-d446-4428-b8d7-0b8dbf4137c9-kube-api-access-jpp99\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761485 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761532 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-audit-policies\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761584 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761599 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761613 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761625 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761640 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761654 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761667 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761680 5118 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761692 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761704 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761716 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jb8vr\" (UniqueName: \"kubernetes.io/projected/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-kube-api-access-jb8vr\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761729 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761741 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.761754 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.762605 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-audit-policies\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.762717 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c242c34-d446-4428-b8d7-0b8dbf4137c9-audit-dir\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.763497 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.763768 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.764595 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.767698 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-session\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.767620 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.768838 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.768984 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-login\") pod 
\"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.769091 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.769409 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-user-template-error\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.769651 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.771025 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c242c34-d446-4428-b8d7-0b8dbf4137c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.785291 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpp99\" (UniqueName: \"kubernetes.io/projected/0c242c34-d446-4428-b8d7-0b8dbf4137c9-kube-api-access-jpp99\") pod \"oauth-openshift-57ffdf54dd-5dg99\" (UID: \"0c242c34-d446-4428-b8d7-0b8dbf4137c9\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:47 crc kubenswrapper[5118]: I1208 17:45:47.892951 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.105305 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57ffdf54dd-5dg99"] Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.263838 5118 generic.go:358] "Generic (PLEG): container finished" podID="9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" containerID="8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6" exitCode=0 Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.263946 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.263953 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" event={"ID":"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e","Type":"ContainerDied","Data":"8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6"} Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.264001 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ztdrc" event={"ID":"9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e","Type":"ContainerDied","Data":"803ad93dfa8700dbf09b3e6a4e33d63e186ef2c8cc3dfa4d900a01a2b041fbcf"} Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.264019 5118 scope.go:117] "RemoveContainer" containerID="8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6" Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.271402 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" event={"ID":"0c242c34-d446-4428-b8d7-0b8dbf4137c9","Type":"ContainerStarted","Data":"abc3a2e84c88a41b34688913233ffa318a8b6cce7c084027b6e31005e2a9a619"} Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.320002 5118 scope.go:117] "RemoveContainer" containerID="8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6" Dec 08 17:45:48 crc kubenswrapper[5118]: E1208 17:45:48.320428 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6\": container with ID starting with 8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6 not found: ID does not exist" containerID="8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6" Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.320488 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6"} err="failed to get container status \"8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6\": rpc error: code = NotFound desc = could not find container \"8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6\": container with ID starting with 8226ae0bddef3b2ba00ae57f90dc81a0d0635b1a410c23d88f9acdb5e8682af6 not found: ID does not exist" Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.344499 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ztdrc"] Dec 08 17:45:48 crc kubenswrapper[5118]: I1208 17:45:48.354348 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ztdrc"] Dec 08 17:45:49 crc kubenswrapper[5118]: I1208 17:45:49.288068 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" event={"ID":"0c242c34-d446-4428-b8d7-0b8dbf4137c9","Type":"ContainerStarted","Data":"25d99fe0c28645602e473053563220d8a0dc628cf6410559f35325dbc6ebc918"} Dec 08 17:45:49 crc kubenswrapper[5118]: I1208 17:45:49.289352 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:49 crc kubenswrapper[5118]: I1208 17:45:49.296358 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" Dec 08 17:45:49 crc kubenswrapper[5118]: I1208 17:45:49.323286 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-57ffdf54dd-5dg99" podStartSLOduration=27.323260967 podStartE2EDuration="27.323260967s" podCreationTimestamp="2025-12-08 17:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:49.317241256 +0000 UTC m=+206.218565390" watchObservedRunningTime="2025-12-08 17:45:49.323260967 +0000 UTC m=+206.224585101" Dec 08 17:45:49 crc kubenswrapper[5118]: I1208 17:45:49.434597 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e" path="/var/lib/kubelet/pods/9bdb30d2-8f69-4d2d-9bf1-3bc70f85369e/volumes" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.231851 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.239760 5118 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.239827 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.239966 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.240228 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8" gracePeriod=15 Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.240307 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e" gracePeriod=15 Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.240380 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425" gracePeriod=15 Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.240379 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957" gracePeriod=15 Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.240459 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307" gracePeriod=15 Dec 08 17:45:50 crc 
kubenswrapper[5118]: I1208 17:45:50.241108 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241161 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241177 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241185 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241193 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241201 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241347 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241356 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241366 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.241372 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242160 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242173 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242187 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242192 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242228 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242237 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242366 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242378 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242386 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242394 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242402 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242417 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242425 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242436 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242557 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242571 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242680 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242793 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.242802 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.246472 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.263888 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.275795 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395099 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395378 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395417 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395433 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395480 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395497 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395545 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395570 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395584 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.395620 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497103 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497159 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497219 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497250 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497342 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497376 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497409 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497446 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497571 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497617 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497730 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497795 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497843 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497915 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.497977 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.498030 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.498036 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.498704 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.498749 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.498800 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: I1208 17:45:50.575722 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5118]: W1208 17:45:50.597104 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-2f412456e1f05ab906ccf98d38822f65ebb3abf1f39c32bd9ad5edd974caaa0e WatchSource:0}: Error finding container 2f412456e1f05ab906ccf98d38822f65ebb3abf1f39c32bd9ad5edd974caaa0e: Status 404 returned error can't find the container with id 2f412456e1f05ab906ccf98d38822f65ebb3abf1f39c32bd9ad5edd974caaa0e Dec 08 17:45:50 crc kubenswrapper[5118]: E1208 17:45:50.603598 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f4e8df788c20b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:45:50.602813963 +0000 UTC m=+207.504138107,LastTimestamp:2025-12-08 17:45:50.602813963 +0000 UTC m=+207.504138107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.084965 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.085365 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.085758 5118 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.085986 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.086479 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.086567 5118 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.087274 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.288221 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms" Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.305316 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.307464 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.308275 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307" exitCode=0 Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.308406 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e" exitCode=0 Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.308608 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957" exitCode=0 Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.308711 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425" exitCode=2 Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.308362 5118 scope.go:117] "RemoveContainer" containerID="09a55b9dd89de217aa828b7f964664fff12b69580598e02e122e83d05b141077" Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.311240 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5"} Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.311286 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"2f412456e1f05ab906ccf98d38822f65ebb3abf1f39c32bd9ad5edd974caaa0e"} Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.313594 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.314202 5118 generic.go:358] "Generic (PLEG): container finished" podID="158725bd-7556-4281-a3cb-acaa6baf5d8c" containerID="509952fab5a13bc2060cdb8baf4a28712771b3cbd148240cee05b4f9cff4bcc8" exitCode=0 Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.314276 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"158725bd-7556-4281-a3cb-acaa6baf5d8c","Type":"ContainerDied","Data":"509952fab5a13bc2060cdb8baf4a28712771b3cbd148240cee05b4f9cff4bcc8"} Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.315586 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: I1208 17:45:51.316089 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:51 crc kubenswrapper[5118]: E1208 17:45:51.689982 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.379088 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:52 crc kubenswrapper[5118]: E1208 17:45:52.483132 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:45:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:45:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:45:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:45:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: E1208 17:45:52.483764 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: E1208 17:45:52.484155 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: E1208 17:45:52.484341 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: E1208 17:45:52.484515 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: E1208 17:45:52.484532 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:45:52 crc kubenswrapper[5118]: E1208 17:45:52.491596 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.657599 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.658293 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.658966 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.659207 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.659405 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.659670 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.660127 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.660532 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.660797 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.830522 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.830722 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-kubelet-dir\") pod \"158725bd-7556-4281-a3cb-acaa6baf5d8c\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.830727 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" 
(OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.830768 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.830824 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.830984 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "158725bd-7556-4281-a3cb-acaa6baf5d8c" (UID: "158725bd-7556-4281-a3cb-acaa6baf5d8c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831021 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-var-lock\") pod \"158725bd-7556-4281-a3cb-acaa6baf5d8c\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831048 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831116 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831118 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-var-lock" (OuterVolumeSpecName: "var-lock") pod "158725bd-7556-4281-a3cb-acaa6baf5d8c" (UID: "158725bd-7556-4281-a3cb-acaa6baf5d8c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831180 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/158725bd-7556-4281-a3cb-acaa6baf5d8c-kube-api-access\") pod \"158725bd-7556-4281-a3cb-acaa6baf5d8c\" (UID: \"158725bd-7556-4281-a3cb-acaa6baf5d8c\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831236 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831387 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.831728 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.832082 5118 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.832109 5118 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.832126 5118 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.832142 5118 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.832159 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/158725bd-7556-4281-a3cb-acaa6baf5d8c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.832175 5118 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.836865 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.840229 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158725bd-7556-4281-a3cb-acaa6baf5d8c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "158725bd-7556-4281-a3cb-acaa6baf5d8c" (UID: "158725bd-7556-4281-a3cb-acaa6baf5d8c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.934988 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:52 crc kubenswrapper[5118]: I1208 17:45:52.935062 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/158725bd-7556-4281-a3cb-acaa6baf5d8c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.396145 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.397367 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8" exitCode=0 Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.397506 5118 scope.go:117] "RemoveContainer" containerID="bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.397511 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.401280 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"158725bd-7556-4281-a3cb-acaa6baf5d8c","Type":"ContainerDied","Data":"b8a864a71dfd0d83f1246550bc2ef6c6048487044c12c63e1738f0ca50d342f8"} Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.401433 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a864a71dfd0d83f1246550bc2ef6c6048487044c12c63e1738f0ca50d342f8" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.401383 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.422689 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.423112 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.423317 5118 scope.go:117] "RemoveContainer" containerID="707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.423850 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.431629 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.432210 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.433374 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.433847 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.434261 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.434734 5118 status_manager.go:895] "Failed to get status for 
pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.436840 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.443444 5118 scope.go:117] "RemoveContainer" containerID="6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.460805 5118 scope.go:117] "RemoveContainer" containerID="e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.476998 5118 scope.go:117] "RemoveContainer" containerID="88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.495407 5118 scope.go:117] "RemoveContainer" containerID="79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.554676 5118 scope.go:117] "RemoveContainer" containerID="bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307" Dec 08 17:45:53 crc kubenswrapper[5118]: E1208 17:45:53.555507 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307\": container with ID starting with bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307 not found: ID does not exist" containerID="bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.555550 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307"} err="failed to get container status \"bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307\": rpc error: code = NotFound desc = could not find container \"bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307\": container with ID starting with bc59dd30d995a3a8d9f31662930ff4062771ef2481abe3cf3883e943a04fb307 not found: ID does not exist" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.555589 5118 scope.go:117] "RemoveContainer" containerID="707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e" Dec 08 17:45:53 crc kubenswrapper[5118]: E1208 17:45:53.555933 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e\": container with ID starting with 707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e not found: ID does not exist" containerID="707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.555969 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e"} err="failed to get container status \"707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e\": rpc error: code = NotFound desc = could not find container 
\"707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e\": container with ID starting with 707ed6c728069793a2ce86dba7bbe414fa9c0bad4f7b3abf19fa593aeefd207e not found: ID does not exist" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.555996 5118 scope.go:117] "RemoveContainer" containerID="6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957" Dec 08 17:45:53 crc kubenswrapper[5118]: E1208 17:45:53.556421 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957\": container with ID starting with 6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957 not found: ID does not exist" containerID="6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.556444 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957"} err="failed to get container status \"6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957\": rpc error: code = NotFound desc = could not find container \"6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957\": container with ID starting with 6fb32b04b12f0a1150681226f15f28429f8eb6bd7fa0a3b9d55412dc59619957 not found: ID does not exist" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.556462 5118 scope.go:117] "RemoveContainer" containerID="e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425" Dec 08 17:45:53 crc kubenswrapper[5118]: E1208 17:45:53.556997 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425\": container with ID starting with e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425 not found: ID does not exist" containerID="e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.557047 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425"} err="failed to get container status \"e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425\": rpc error: code = NotFound desc = could not find container \"e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425\": container with ID starting with e46a7d797e71812da483916f6a5d5f9c04a83282a920a12bf84ab33b81c72425 not found: ID does not exist" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.557131 5118 scope.go:117] "RemoveContainer" containerID="88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8" Dec 08 17:45:53 crc kubenswrapper[5118]: E1208 17:45:53.557559 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8\": container with ID starting with 88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8 not found: ID does not exist" containerID="88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.557592 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8"} 
err="failed to get container status \"88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8\": rpc error: code = NotFound desc = could not find container \"88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8\": container with ID starting with 88c46ed55f61960077efe0009a715acb46c877307a6d5e8d2bbb1b1c940351c8 not found: ID does not exist" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.557611 5118 scope.go:117] "RemoveContainer" containerID="79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce" Dec 08 17:45:53 crc kubenswrapper[5118]: E1208 17:45:53.557938 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce\": container with ID starting with 79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce not found: ID does not exist" containerID="79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce" Dec 08 17:45:53 crc kubenswrapper[5118]: I1208 17:45:53.557965 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce"} err="failed to get container status \"79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce\": rpc error: code = NotFound desc = could not find container \"79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce\": container with ID starting with 79d28781c46437a1fa8bbb18bad40812f011e8b4b26403d391ebe33b2f638fce not found: ID does not exist" Dec 08 17:45:54 crc kubenswrapper[5118]: E1208 17:45:54.093391 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="3.2s" Dec 08 17:45:57 crc kubenswrapper[5118]: E1208 17:45:57.157834 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f4e8df788c20b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:45:50.602813963 +0000 UTC m=+207.504138107,LastTimestamp:2025-12-08 17:45:50.602813963 +0000 UTC m=+207.504138107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:45:57 crc kubenswrapper[5118]: E1208 17:45:57.295117 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="6.4s" Dec 08 17:46:02 crc kubenswrapper[5118]: I1208 17:46:02.427462 5118 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:02 crc kubenswrapper[5118]: I1208 17:46:02.429221 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:02 crc kubenswrapper[5118]: I1208 17:46:02.429663 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:02 crc kubenswrapper[5118]: I1208 17:46:02.462379 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:02 crc kubenswrapper[5118]: I1208 17:46:02.462434 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:02 crc kubenswrapper[5118]: E1208 17:46:02.463248 5118 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:02 crc kubenswrapper[5118]: I1208 17:46:02.463597 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:02 crc kubenswrapper[5118]: W1208 17:46:02.490579 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-ff9b500693f86d7b5b738de2ec6f84016484203e4d63beefcc521d60a18c4e55 WatchSource:0}: Error finding container ff9b500693f86d7b5b738de2ec6f84016484203e4d63beefcc521d60a18c4e55: Status 404 returned error can't find the container with id ff9b500693f86d7b5b738de2ec6f84016484203e4d63beefcc521d60a18c4e55 Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.439275 5118 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.439923 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.440512 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.476738 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"89ef0331f8862c254be601ce56f8d819bf290e2f14f61f5abf4fa9e123f332ba"} Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.476817 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ff9b500693f86d7b5b738de2ec6f84016484203e4d63beefcc521d60a18c4e55"} Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.477457 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.477490 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:03 crc kubenswrapper[5118]: E1208 17:46:03.478084 5118 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.478083 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.478834 5118 status_manager.go:895] "Failed to get status for pod" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:03 crc kubenswrapper[5118]: I1208 17:46:03.479151 5118 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Dec 08 17:46:03 crc kubenswrapper[5118]: E1208 17:46:03.696592 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="7s" Dec 08 17:46:04 crc kubenswrapper[5118]: I1208 17:46:04.486218 5118 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="89ef0331f8862c254be601ce56f8d819bf290e2f14f61f5abf4fa9e123f332ba" exitCode=0 Dec 08 17:46:04 crc kubenswrapper[5118]: I1208 17:46:04.486304 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"89ef0331f8862c254be601ce56f8d819bf290e2f14f61f5abf4fa9e123f332ba"} Dec 08 17:46:04 crc kubenswrapper[5118]: I1208 17:46:04.486357 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"67f689df3c5818240817a524f71346b388d29bb4a99f6ba3178703a9c8edac8a"} Dec 08 17:46:04 crc kubenswrapper[5118]: I1208 17:46:04.486367 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"49946d2d40e798bd89dda2f1e41e5fe81b105f431423a70ace4082c566de5f14"} Dec 08 17:46:04 crc kubenswrapper[5118]: I1208 17:46:04.486376 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"337c84bc262e4b2b6758e1d0a715aa833ae1db1c7c9953b0d920f730c382803f"} Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.424858 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.425242 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.509784 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.509845 5118 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="cfca8d494212b21c8a44513c5dd06e44549b08479d7bf1138bd5fb15936ccee8" exitCode=1 Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.510050 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"cfca8d494212b21c8a44513c5dd06e44549b08479d7bf1138bd5fb15936ccee8"} Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.511051 5118 scope.go:117] "RemoveContainer" containerID="cfca8d494212b21c8a44513c5dd06e44549b08479d7bf1138bd5fb15936ccee8" Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.513497 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"16553863c0c13133794c80200a96c7521a3a1895972cafb101263ac5c4f59ecc"} Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.513542 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"2d2cc5c4562197e598d8a92d477dc42f2ef6a00daa4d7d7004266c4a899da085"} Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.513764 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.513855 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:05 crc kubenswrapper[5118]: I1208 17:46:05.513894 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:06 crc kubenswrapper[5118]: I1208 17:46:06.521064 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:46:06 crc kubenswrapper[5118]: I1208 17:46:06.521590 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"77dde1c45fb3930d55762566cc80737f8bfecc6cd544e4481944dc146ae105d9"} Dec 08 17:46:07 crc kubenswrapper[5118]: I1208 17:46:07.463772 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:07 crc kubenswrapper[5118]: I1208 17:46:07.463838 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:07 crc kubenswrapper[5118]: I1208 17:46:07.473090 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:10 crc kubenswrapper[5118]: I1208 17:46:10.531936 5118 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:10 crc kubenswrapper[5118]: I1208 17:46:10.532631 
5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:10 crc kubenswrapper[5118]: I1208 17:46:10.569534 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"264185e3-872b-4c02-a81a-b4ed66da2e56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:46:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:46:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89ef0331f8862c254be601ce56f8d819bf290e2f14f61f5abf4fa9e123f332ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89ef0331f8862c254be601ce56f8d819bf290e2f14f61f5abf4fa9e123f332ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:46:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:46:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"264185e3-872b-4c02-a81a-b4ed66da2e56\": field is immutable" Dec 08 17:46:10 crc kubenswrapper[5118]: I1208 17:46:10.670364 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="45ca0aa6-6f35-4624-b58b-0a1a4b75ee4b" Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.554184 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.555015 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.560787 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="45ca0aa6-6f35-4624-b58b-0a1a4b75ee4b" Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.562711 5118 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://337c84bc262e4b2b6758e1d0a715aa833ae1db1c7c9953b0d920f730c382803f" Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.562738 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.887703 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.888267 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 17:46:11 crc kubenswrapper[5118]: I1208 17:46:11.888717 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 17:46:12 crc kubenswrapper[5118]: I1208 17:46:12.558999 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:12 crc kubenswrapper[5118]: I1208 17:46:12.559028 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:12 crc kubenswrapper[5118]: I1208 17:46:12.562262 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="45ca0aa6-6f35-4624-b58b-0a1a4b75ee4b" Dec 08 17:46:15 crc kubenswrapper[5118]: I1208 17:46:15.424630 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:20 crc kubenswrapper[5118]: I1208 17:46:20.476166 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:46:21 crc kubenswrapper[5118]: I1208 17:46:21.330094 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 17:46:21 crc kubenswrapper[5118]: I1208 17:46:21.887998 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 17:46:21 crc kubenswrapper[5118]: I1208 17:46:21.888122 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" 
output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 17:46:21 crc kubenswrapper[5118]: I1208 17:46:21.974850 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 17:46:22 crc kubenswrapper[5118]: I1208 17:46:22.368821 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 17:46:22 crc kubenswrapper[5118]: I1208 17:46:22.372503 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:46:22 crc kubenswrapper[5118]: I1208 17:46:22.566065 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 17:46:22 crc kubenswrapper[5118]: I1208 17:46:22.613667 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:46:22 crc kubenswrapper[5118]: I1208 17:46:22.948227 5118 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:46:22 crc kubenswrapper[5118]: I1208 17:46:22.985024 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.186332 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.237028 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.313221 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.331142 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.332908 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.409093 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.482906 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.547707 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.629505 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.633003 5118 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.658892 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.869784 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.919939 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 17:46:23 crc kubenswrapper[5118]: I1208 17:46:23.962389 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.076599 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.333174 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.347145 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.370243 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.397565 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.437274 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.451543 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.451544 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.543961 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.572624 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.666171 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.739525 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.831720 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.843044 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5118]: I1208 17:46:24.954685 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.016338 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.026166 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.043233 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.060924 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.099362 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.153952 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.243855 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.285479 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.440922 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.502359 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.518692 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.573470 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.824676 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.901275 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.929570 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 17:46:25 crc kubenswrapper[5118]: I1208 17:46:25.980160 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.017674 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.024827 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.117232 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.152552 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.179617 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.220940 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.221688 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.254861 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.370514 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.442934 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.443146 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.478158 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.515990 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.516017 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.630542 5118 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.651116 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.659708 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.679858 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.860426 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5118]: I1208 17:46:26.873216 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.007493 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.074774 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.077080 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.145932 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.178466 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.268977 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.276694 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.277387 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.347940 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.384341 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.546685 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.651286 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.659288 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 17:46:27 crc 
kubenswrapper[5118]: I1208 17:46:27.660614 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.708709 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.834100 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 17:46:27 crc kubenswrapper[5118]: I1208 17:46:27.936325 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.183203 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.206101 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.251704 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.268112 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.268380 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.269784 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.364222 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.470976 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.610071 5118 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.722788 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.724258 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.726796 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.736705 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.781067 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.795343 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.828092 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.924958 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.951901 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 17:46:28 crc kubenswrapper[5118]: I1208 17:46:28.972429 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.109864 5118 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.218052 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.261574 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.326700 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.434516 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.459258 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.495258 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.505227 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.608902 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.709891 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.787320 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.821994 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.831170 5118 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.936312 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 17:46:29 crc kubenswrapper[5118]: I1208 17:46:29.998058 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.024792 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.035168 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.093795 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.099276 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.115053 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.146465 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.184336 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.200561 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.211475 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.224207 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.349456 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.389868 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.393774 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.394524 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.409992 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 
17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.523499 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.539136 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.615810 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.657359 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.659112 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.683552 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.696773 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.719481 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.740184 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.770975 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.811777 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.851408 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.852701 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.866075 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.875080 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.918976 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:46:30 crc kubenswrapper[5118]: I1208 17:46:30.952752 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 17:46:31 crc 
kubenswrapper[5118]: I1208 17:46:31.009346 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.016937 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.032405 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.086825 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.112624 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.112638 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.191680 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.237074 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.272126 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.301949 5118 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.306313 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.306292835 podStartE2EDuration="41.306292835s" podCreationTimestamp="2025-12-08 17:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:46:10.582350383 +0000 UTC m=+227.483674507" watchObservedRunningTime="2025-12-08 17:46:31.306292835 +0000 UTC m=+248.207616929" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.308010 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.308157 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.308676 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.308709 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="264185e3-872b-4c02-a81a-b4ed66da2e56" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.314580 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:31 crc kubenswrapper[5118]: 
I1208 17:46:31.327982 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.327964614 podStartE2EDuration="21.327964614s" podCreationTimestamp="2025-12-08 17:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:46:31.327161601 +0000 UTC m=+248.228485735" watchObservedRunningTime="2025-12-08 17:46:31.327964614 +0000 UTC m=+248.229288728" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.376913 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.416576 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.488283 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.554208 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.580963 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.590382 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.671604 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.710635 5118 ???:1] "http: TLS handshake error from 192.168.126.11:38798: no serving certificate available for the kubelet" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.843929 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.888091 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.888178 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.888238 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.888912 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"77dde1c45fb3930d55762566cc80737f8bfecc6cd544e4481944dc146ae105d9"} 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.889084 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://77dde1c45fb3930d55762566cc80737f8bfecc6cd544e4481944dc146ae105d9" gracePeriod=30 Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.962470 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:46:31 crc kubenswrapper[5118]: I1208 17:46:31.962561 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.026194 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.087500 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.104087 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.180537 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.239241 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.326205 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.349550 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.452958 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.466383 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.492328 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.536031 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.622192 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.725317 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.728633 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.752862 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.825986 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.863500 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.867184 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.892764 5118 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.893092 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5" gracePeriod=5 Dec 08 17:46:32 crc kubenswrapper[5118]: I1208 17:46:32.985122 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.017770 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.018007 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.044572 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.555147 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.604945 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.652709 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.684656 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.689582 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.714106 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.869734 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.904333 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.929660 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.939743 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.941197 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.962126 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 17:46:33 crc kubenswrapper[5118]: I1208 17:46:33.964485 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.012777 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.041206 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.154362 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.206636 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.226290 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.432967 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.456770 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.499233 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 
17:46:34.534505 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.568224 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.601317 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.601488 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.609549 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.644110 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.673838 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.730956 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.775570 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 17:46:34 crc kubenswrapper[5118]: I1208 17:46:34.859470 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:35 crc kubenswrapper[5118]: I1208 17:46:35.084806 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 17:46:35 crc kubenswrapper[5118]: I1208 17:46:35.259758 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 17:46:35 crc kubenswrapper[5118]: I1208 17:46:35.772247 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:35 crc kubenswrapper[5118]: I1208 17:46:35.825387 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.001154 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.040943 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.158021 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.294424 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.429698 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.449401 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.456147 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.560801 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.884431 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 17:46:36 crc kubenswrapper[5118]: I1208 17:46:36.892196 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:37 crc kubenswrapper[5118]: I1208 17:46:37.061425 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 17:46:37 crc kubenswrapper[5118]: I1208 17:46:37.082838 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:37 crc kubenswrapper[5118]: I1208 17:46:37.222371 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 17:46:37 crc kubenswrapper[5118]: I1208 17:46:37.453189 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.035276 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.042073 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.089791 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.477307 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.477383 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.589956 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590106 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590196 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590293 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590323 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590392 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590517 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590573 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.590587 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.591249 5118 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.591278 5118 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.591291 5118 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.591302 5118 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.603444 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.674584 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.692839 5118 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.741413 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.741470 5118 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5" exitCode=137 Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.741536 5118 scope.go:117] "RemoveContainer" containerID="282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.742379 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.760189 5118 scope.go:117] "RemoveContainer" containerID="282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5" Dec 08 17:46:38 crc kubenswrapper[5118]: E1208 17:46:38.760933 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5\": container with ID starting with 282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5 not found: ID does not exist" containerID="282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5" Dec 08 17:46:38 crc kubenswrapper[5118]: I1208 17:46:38.761176 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5"} err="failed to get container status \"282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5\": rpc error: code = NotFound desc = could not find container \"282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5\": container with ID starting with 282e0270b82b4bc33057c7b45474f8c44521902b90a91947d4d7f5053f96bcc5 not found: ID does not exist" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.405503 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.431839 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.438303 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.438739 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.447543 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.451399 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.451626 5118 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a5a3d875-4690-4d15-863d-951b173e90b5" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.456192 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.456253 5118 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a5a3d875-4690-4d15-863d-951b173e90b5" Dec 08 17:46:39 crc kubenswrapper[5118]: I1208 17:46:39.869655 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:52 crc kubenswrapper[5118]: I1208 17:46:52.819158 5118 generic.go:358] "Generic (PLEG): container finished" 
podID="9af82654-06bc-4376-bff5-d6adacce9785" containerID="79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e" exitCode=0 Dec 08 17:46:52 crc kubenswrapper[5118]: I1208 17:46:52.819226 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" event={"ID":"9af82654-06bc-4376-bff5-d6adacce9785","Type":"ContainerDied","Data":"79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e"} Dec 08 17:46:52 crc kubenswrapper[5118]: I1208 17:46:52.822090 5118 scope.go:117] "RemoveContainer" containerID="79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e" Dec 08 17:46:53 crc kubenswrapper[5118]: I1208 17:46:53.827780 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" event={"ID":"9af82654-06bc-4376-bff5-d6adacce9785","Type":"ContainerStarted","Data":"722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13"} Dec 08 17:46:53 crc kubenswrapper[5118]: I1208 17:46:53.829141 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:46:53 crc kubenswrapper[5118]: I1208 17:46:53.830712 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:47:01 crc kubenswrapper[5118]: I1208 17:47:01.962792 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:47:01 crc kubenswrapper[5118]: I1208 17:47:01.963398 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:47:02 crc kubenswrapper[5118]: I1208 17:47:02.883328 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:47:02 crc kubenswrapper[5118]: I1208 17:47:02.885730 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:47:02 crc kubenswrapper[5118]: I1208 17:47:02.885801 5118 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="77dde1c45fb3930d55762566cc80737f8bfecc6cd544e4481944dc146ae105d9" exitCode=137 Dec 08 17:47:02 crc kubenswrapper[5118]: I1208 17:47:02.885858 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"77dde1c45fb3930d55762566cc80737f8bfecc6cd544e4481944dc146ae105d9"} Dec 08 17:47:02 crc kubenswrapper[5118]: I1208 17:47:02.885951 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4f53173e9e1dd4a679c5b4d848eeb176200c2cde5757f590218778b04e530907"} Dec 08 17:47:02 crc kubenswrapper[5118]: I1208 17:47:02.885997 5118 scope.go:117] "RemoveContainer" containerID="cfca8d494212b21c8a44513c5dd06e44549b08479d7bf1138bd5fb15936ccee8" Dec 08 17:47:03 crc kubenswrapper[5118]: I1208 17:47:03.895380 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:47:05 crc kubenswrapper[5118]: I1208 17:47:05.424592 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:47:09 crc kubenswrapper[5118]: I1208 17:47:09.285096 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42204: no serving certificate available for the kubelet" Dec 08 17:47:11 crc kubenswrapper[5118]: I1208 17:47:11.887324 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:47:11 crc kubenswrapper[5118]: I1208 17:47:11.895466 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:47:21 crc kubenswrapper[5118]: I1208 17:47:21.949513 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.072791 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q"] Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.073424 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" podUID="32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" containerName="route-controller-manager" containerID="cri-o://c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f" gracePeriod=30 Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.076768 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6wjgz"] Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.077098 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" podUID="8dcd2702-e20f-439b-b2c7-27095126b87e" containerName="controller-manager" containerID="cri-o://3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb" gracePeriod=30 Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.538141 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.544723 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.563661 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj"] Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564395 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" containerName="installer" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564419 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" containerName="installer" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564433 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8dcd2702-e20f-439b-b2c7-27095126b87e" containerName="controller-manager" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564443 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcd2702-e20f-439b-b2c7-27095126b87e" containerName="controller-manager" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564472 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564479 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564503 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" containerName="route-controller-manager" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564510 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" containerName="route-controller-manager" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564614 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="158725bd-7556-4281-a3cb-acaa6baf5d8c" containerName="installer" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564630 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" containerName="route-controller-manager" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564639 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8dcd2702-e20f-439b-b2c7-27095126b87e" containerName="controller-manager" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.564651 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.570300 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.581766 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-config\") pod \"8dcd2702-e20f-439b-b2c7-27095126b87e\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.581860 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2702-e20f-439b-b2c7-27095126b87e-tmp\") pod \"8dcd2702-e20f-439b-b2c7-27095126b87e\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.581953 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-serving-cert\") pod \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582035 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-client-ca\") pod \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582105 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t9zh\" (UniqueName: \"kubernetes.io/projected/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-kube-api-access-2t9zh\") pod \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582191 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-proxy-ca-bundles\") pod \"8dcd2702-e20f-439b-b2c7-27095126b87e\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582217 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkd6h\" (UniqueName: \"kubernetes.io/projected/8dcd2702-e20f-439b-b2c7-27095126b87e-kube-api-access-lkd6h\") pod \"8dcd2702-e20f-439b-b2c7-27095126b87e\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582373 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-config\") pod \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\" (UID: \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582423 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd2702-e20f-439b-b2c7-27095126b87e-serving-cert\") pod \"8dcd2702-e20f-439b-b2c7-27095126b87e\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582463 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-tmp\") pod \"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\" (UID: 
\"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.582544 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-client-ca\") pod \"8dcd2702-e20f-439b-b2c7-27095126b87e\" (UID: \"8dcd2702-e20f-439b-b2c7-27095126b87e\") " Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.583767 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcd2702-e20f-439b-b2c7-27095126b87e-tmp" (OuterVolumeSpecName: "tmp") pod "8dcd2702-e20f-439b-b2c7-27095126b87e" (UID: "8dcd2702-e20f-439b-b2c7-27095126b87e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.584299 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-config" (OuterVolumeSpecName: "config") pod "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" (UID: "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.584402 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-config" (OuterVolumeSpecName: "config") pod "8dcd2702-e20f-439b-b2c7-27095126b87e" (UID: "8dcd2702-e20f-439b-b2c7-27095126b87e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.584836 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8dcd2702-e20f-439b-b2c7-27095126b87e" (UID: "8dcd2702-e20f-439b-b2c7-27095126b87e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.585953 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj"] Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.586527 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" (UID: "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.586822 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-tmp" (OuterVolumeSpecName: "tmp") pod "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" (UID: "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.589070 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-client-ca" (OuterVolumeSpecName: "client-ca") pod "8dcd2702-e20f-439b-b2c7-27095126b87e" (UID: "8dcd2702-e20f-439b-b2c7-27095126b87e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.598240 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-kube-api-access-2t9zh" (OuterVolumeSpecName: "kube-api-access-2t9zh") pod "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" (UID: "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6"). InnerVolumeSpecName "kube-api-access-2t9zh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.599631 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcd2702-e20f-439b-b2c7-27095126b87e-kube-api-access-lkd6h" (OuterVolumeSpecName: "kube-api-access-lkd6h") pod "8dcd2702-e20f-439b-b2c7-27095126b87e" (UID: "8dcd2702-e20f-439b-b2c7-27095126b87e"). InnerVolumeSpecName "kube-api-access-lkd6h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.599812 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcd2702-e20f-439b-b2c7-27095126b87e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8dcd2702-e20f-439b-b2c7-27095126b87e" (UID: "8dcd2702-e20f-439b-b2c7-27095126b87e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.599819 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" (UID: "32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.600725 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cd9c44569-vhg58"] Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.607484 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.614624 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9c44569-vhg58"] Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684354 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-client-ca\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684407 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-serving-cert\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684425 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-client-ca\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684447 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-config\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684470 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-tmp\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684493 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-proxy-ca-bundles\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684509 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-config\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684523 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78gm8\" (UniqueName: 
\"kubernetes.io/projected/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-kube-api-access-78gm8\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684579 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqhcx\" (UniqueName: \"kubernetes.io/projected/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-kube-api-access-bqhcx\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684600 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-serving-cert\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684617 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-tmp\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684651 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd2702-e20f-439b-b2c7-27095126b87e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684661 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684670 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684678 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684686 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2702-e20f-439b-b2c7-27095126b87e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684693 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684700 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684707 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2t9zh\" (UniqueName: 
\"kubernetes.io/projected/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-kube-api-access-2t9zh\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684715 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd2702-e20f-439b-b2c7-27095126b87e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684723 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lkd6h\" (UniqueName: \"kubernetes.io/projected/8dcd2702-e20f-439b-b2c7-27095126b87e-kube-api-access-lkd6h\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.684731 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.786623 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-serving-cert\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.786730 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-client-ca\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.786814 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-config\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.786929 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-tmp\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.786999 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-proxy-ca-bundles\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.787092 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-config\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.787147 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-78gm8\" (UniqueName: \"kubernetes.io/projected/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-kube-api-access-78gm8\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.787277 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bqhcx\" (UniqueName: \"kubernetes.io/projected/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-kube-api-access-bqhcx\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.787350 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-serving-cert\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.787402 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-tmp\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.787466 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-client-ca\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.787471 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-tmp\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.788076 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-client-ca\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.788216 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-config\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.788500 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-tmp\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: 
\"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.788543 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-proxy-ca-bundles\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.788727 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-config\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.789304 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-client-ca\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.790767 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-serving-cert\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.794073 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-serving-cert\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.805098 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqhcx\" (UniqueName: \"kubernetes.io/projected/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-kube-api-access-bqhcx\") pod \"route-controller-manager-6975b9f87f-8vkdj\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.805413 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-78gm8\" (UniqueName: \"kubernetes.io/projected/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-kube-api-access-78gm8\") pod \"controller-manager-6cd9c44569-vhg58\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.892093 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:22 crc kubenswrapper[5118]: I1208 17:47:22.925678 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.012558 5118 generic.go:358] "Generic (PLEG): container finished" podID="8dcd2702-e20f-439b-b2c7-27095126b87e" containerID="3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb" exitCode=0 Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.012654 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" event={"ID":"8dcd2702-e20f-439b-b2c7-27095126b87e","Type":"ContainerDied","Data":"3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb"} Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.012678 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" event={"ID":"8dcd2702-e20f-439b-b2c7-27095126b87e","Type":"ContainerDied","Data":"a52870906a18720d6272a3d6961d0db095af769bae361b4b65db5b6303cb885d"} Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.012694 5118 scope.go:117] "RemoveContainer" containerID="3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.012798 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6wjgz" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.045333 5118 generic.go:358] "Generic (PLEG): container finished" podID="32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" containerID="c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f" exitCode=0 Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.045386 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" event={"ID":"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6","Type":"ContainerDied","Data":"c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f"} Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.045439 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" event={"ID":"32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6","Type":"ContainerDied","Data":"68c350dfaec5080e8a88faabfaf27154a6c5538a37e7bd8bd70c0353c8cdd2ad"} Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.045460 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.072785 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6wjgz"] Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.078376 5118 scope.go:117] "RemoveContainer" containerID="3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb" Dec 08 17:47:23 crc kubenswrapper[5118]: E1208 17:47:23.078817 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb\": container with ID starting with 3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb not found: ID does not exist" containerID="3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.078855 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb"} err="failed to get container status \"3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb\": rpc error: code = NotFound desc = could not find container \"3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb\": container with ID starting with 3f14ba348594a64bde7d8f58092259d152ec3a1780af9beb54de7fa70aa50ecb not found: ID does not exist" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.079202 5118 scope.go:117] "RemoveContainer" containerID="c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.092073 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6wjgz"] Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.099436 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q"] Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.103108 5118 scope.go:117] "RemoveContainer" containerID="c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f" Dec 08 17:47:23 crc kubenswrapper[5118]: E1208 17:47:23.104303 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f\": container with ID starting with c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f not found: ID does not exist" containerID="c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.104342 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f"} err="failed to get container status \"c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f\": rpc error: code = NotFound desc = could not find container \"c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f\": container with ID starting with c43168f83a6d51b5e882078c0183a3effc8258c2f200874ceb15ad3cc30aad5f not found: ID does not exist" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.105977 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-qkg2q"] Dec 08 
17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.134472 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj"] Dec 08 17:47:23 crc kubenswrapper[5118]: W1208 17:47:23.141009 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec00f11_942e_4b18_91f5_5efd88fe3f3a.slice/crio-808736aa9807f44016256ad7e368d5805041910512efa8731f76387b642e9e6a WatchSource:0}: Error finding container 808736aa9807f44016256ad7e368d5805041910512efa8731f76387b642e9e6a: Status 404 returned error can't find the container with id 808736aa9807f44016256ad7e368d5805041910512efa8731f76387b642e9e6a Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.169077 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9c44569-vhg58"] Dec 08 17:47:23 crc kubenswrapper[5118]: W1208 17:47:23.172287 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f2d1606_a6a8_49a3_87d3_56c3b28b41e0.slice/crio-a691a901a94f282e4180b4900b8d8b9c89e8509d34eae05f4127639012410520 WatchSource:0}: Error finding container a691a901a94f282e4180b4900b8d8b9c89e8509d34eae05f4127639012410520: Status 404 returned error can't find the container with id a691a901a94f282e4180b4900b8d8b9c89e8509d34eae05f4127639012410520 Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.269795 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9c44569-vhg58"] Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.436392 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6" path="/var/lib/kubelet/pods/32bb589d-b6b8-4ab2-a9a2-5bae968bd2c6/volumes" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.437084 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcd2702-e20f-439b-b2c7-27095126b87e" path="/var/lib/kubelet/pods/8dcd2702-e20f-439b-b2c7-27095126b87e/volumes" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.567087 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:47:23 crc kubenswrapper[5118]: I1208 17:47:23.567596 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.051631 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" event={"ID":"0ec00f11-942e-4b18-91f5-5efd88fe3f3a","Type":"ContainerStarted","Data":"d20f84b7906e58a59401cd451efd71d034119d9e872f1a78f98333c24c981da5"} Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.051702 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.051722 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" 
event={"ID":"0ec00f11-942e-4b18-91f5-5efd88fe3f3a","Type":"ContainerStarted","Data":"808736aa9807f44016256ad7e368d5805041910512efa8731f76387b642e9e6a"} Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.054018 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" event={"ID":"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0","Type":"ContainerStarted","Data":"20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019"} Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.054074 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" event={"ID":"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0","Type":"ContainerStarted","Data":"a691a901a94f282e4180b4900b8d8b9c89e8509d34eae05f4127639012410520"} Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.054072 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" podUID="6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" containerName="controller-manager" containerID="cri-o://20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019" gracePeriod=30 Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.054188 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.062105 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.063286 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.091402 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" podStartSLOduration=2.091383521 podStartE2EDuration="2.091383521s" podCreationTimestamp="2025-12-08 17:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:47:24.073243271 +0000 UTC m=+300.974567375" watchObservedRunningTime="2025-12-08 17:47:24.091383521 +0000 UTC m=+300.992707615" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.095508 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" podStartSLOduration=2.095489291 podStartE2EDuration="2.095489291s" podCreationTimestamp="2025-12-08 17:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:47:24.090902666 +0000 UTC m=+300.992226780" watchObservedRunningTime="2025-12-08 17:47:24.095489291 +0000 UTC m=+300.996813385" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.399754 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.430807 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv"] Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.431477 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" containerName="controller-manager" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.431500 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" containerName="controller-manager" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.431633 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" containerName="controller-manager" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.435212 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.443308 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv"] Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510088 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-serving-cert\") pod \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510225 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-proxy-ca-bundles\") pod \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510272 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78gm8\" (UniqueName: \"kubernetes.io/projected/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-kube-api-access-78gm8\") pod \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510298 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-tmp\") pod \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510371 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-client-ca\") pod \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510393 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-config\") pod \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\" (UID: \"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0\") " Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510518 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-proxy-ca-bundles\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510581 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6nkb\" (UniqueName: \"kubernetes.io/projected/bb242c6c-f6d4-4c20-b143-aaf339af083f-kube-api-access-z6nkb\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510620 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb242c6c-f6d4-4c20-b143-aaf339af083f-serving-cert\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510647 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-client-ca\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510868 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-tmp" (OuterVolumeSpecName: "tmp") pod "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" (UID: "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510901 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-config\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.510975 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb242c6c-f6d4-4c20-b143-aaf339af083f-tmp\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.511028 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" (UID: "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.511102 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.511117 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.511109 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" (UID: "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.511213 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-config" (OuterVolumeSpecName: "config") pod "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" (UID: "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.515506 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" (UID: "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.519053 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-kube-api-access-78gm8" (OuterVolumeSpecName: "kube-api-access-78gm8") pod "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" (UID: "6f2d1606-a6a8-49a3-87d3-56c3b28b41e0"). InnerVolumeSpecName "kube-api-access-78gm8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612360 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-config\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612406 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb242c6c-f6d4-4c20-b143-aaf339af083f-tmp\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612471 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-proxy-ca-bundles\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612510 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6nkb\" (UniqueName: \"kubernetes.io/projected/bb242c6c-f6d4-4c20-b143-aaf339af083f-kube-api-access-z6nkb\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612538 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb242c6c-f6d4-4c20-b143-aaf339af083f-serving-cert\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612560 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-client-ca\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612624 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612636 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612648 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.612661 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-78gm8\" (UniqueName: 
\"kubernetes.io/projected/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0-kube-api-access-78gm8\") on node \"crc\" DevicePath \"\"" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.613783 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-client-ca\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.615020 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-config\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.615714 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb242c6c-f6d4-4c20-b143-aaf339af083f-proxy-ca-bundles\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.616365 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb242c6c-f6d4-4c20-b143-aaf339af083f-tmp\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.626639 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb242c6c-f6d4-4c20-b143-aaf339af083f-serving-cert\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.637826 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6nkb\" (UniqueName: \"kubernetes.io/projected/bb242c6c-f6d4-4c20-b143-aaf339af083f-kube-api-access-z6nkb\") pod \"controller-manager-5cb6f9d449-mjxkv\" (UID: \"bb242c6c-f6d4-4c20-b143-aaf339af083f\") " pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.761698 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.974186 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv"] Dec 08 17:47:24 crc kubenswrapper[5118]: W1208 17:47:24.977173 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb242c6c_f6d4_4c20_b143_aaf339af083f.slice/crio-68baf2802c3eddd9fabce6ce9c52c39ba9ed05121945d576775172a9cf19ae30 WatchSource:0}: Error finding container 68baf2802c3eddd9fabce6ce9c52c39ba9ed05121945d576775172a9cf19ae30: Status 404 returned error can't find the container with id 68baf2802c3eddd9fabce6ce9c52c39ba9ed05121945d576775172a9cf19ae30 Dec 08 17:47:24 crc kubenswrapper[5118]: I1208 17:47:24.979406 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.058743 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" event={"ID":"bb242c6c-f6d4-4c20-b143-aaf339af083f","Type":"ContainerStarted","Data":"68baf2802c3eddd9fabce6ce9c52c39ba9ed05121945d576775172a9cf19ae30"} Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.059917 5118 generic.go:358] "Generic (PLEG): container finished" podID="6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" containerID="20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019" exitCode=0 Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.060355 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" event={"ID":"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0","Type":"ContainerDied","Data":"20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019"} Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.060405 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" event={"ID":"6f2d1606-a6a8-49a3-87d3-56c3b28b41e0","Type":"ContainerDied","Data":"a691a901a94f282e4180b4900b8d8b9c89e8509d34eae05f4127639012410520"} Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.060422 5118 scope.go:117] "RemoveContainer" containerID="20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019" Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.060421 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9c44569-vhg58" Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.078045 5118 scope.go:117] "RemoveContainer" containerID="20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019" Dec 08 17:47:25 crc kubenswrapper[5118]: E1208 17:47:25.078866 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019\": container with ID starting with 20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019 not found: ID does not exist" containerID="20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019" Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.078936 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019"} err="failed to get container status \"20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019\": rpc error: code = NotFound desc = could not find container \"20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019\": container with ID starting with 20927b20f189da4d3f7331149a570309197e4153d226d93b6724203775de5019 not found: ID does not exist" Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.089008 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9c44569-vhg58"] Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.092182 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9c44569-vhg58"] Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.434756 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f2d1606-a6a8-49a3-87d3-56c3b28b41e0" path="/var/lib/kubelet/pods/6f2d1606-a6a8-49a3-87d3-56c3b28b41e0/volumes" Dec 08 17:47:25 crc kubenswrapper[5118]: I1208 17:47:25.874803 5118 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:47:26 crc kubenswrapper[5118]: I1208 17:47:26.068939 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" event={"ID":"bb242c6c-f6d4-4c20-b143-aaf339af083f","Type":"ContainerStarted","Data":"385ff854e8e4e173a04212de3b1c8bf62112abd457bf02cc1e7a97fc0b99a522"} Dec 08 17:47:26 crc kubenswrapper[5118]: I1208 17:47:26.069072 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:26 crc kubenswrapper[5118]: I1208 17:47:26.075916 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" Dec 08 17:47:26 crc kubenswrapper[5118]: I1208 17:47:26.094774 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cb6f9d449-mjxkv" podStartSLOduration=3.094748855 podStartE2EDuration="3.094748855s" podCreationTimestamp="2025-12-08 17:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:47:26.08662524 +0000 UTC m=+302.987949344" watchObservedRunningTime="2025-12-08 17:47:26.094748855 +0000 UTC m=+302.996072979" Dec 08 17:47:31 crc kubenswrapper[5118]: 
I1208 17:47:31.962668 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:47:31 crc kubenswrapper[5118]: I1208 17:47:31.963149 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:47:31 crc kubenswrapper[5118]: I1208 17:47:31.963221 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:47:31 crc kubenswrapper[5118]: I1208 17:47:31.964103 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d85cd5695eb2bfdc0550d3965b70689a69b9c315b96786c2d8f2213d1fc4d407"} pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:47:31 crc kubenswrapper[5118]: I1208 17:47:31.964211 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" containerID="cri-o://d85cd5695eb2bfdc0550d3965b70689a69b9c315b96786c2d8f2213d1fc4d407" gracePeriod=600 Dec 08 17:47:34 crc kubenswrapper[5118]: I1208 17:47:34.125762 5118 generic.go:358] "Generic (PLEG): container finished" podID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerID="d85cd5695eb2bfdc0550d3965b70689a69b9c315b96786c2d8f2213d1fc4d407" exitCode=0 Dec 08 17:47:34 crc kubenswrapper[5118]: I1208 17:47:34.126065 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerDied","Data":"d85cd5695eb2bfdc0550d3965b70689a69b9c315b96786c2d8f2213d1fc4d407"} Dec 08 17:47:35 crc kubenswrapper[5118]: I1208 17:47:35.135695 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"b0ca934293bb401de268428d32fee96419e1934766145fbcb973b04a905f6519"} Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.261200 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj"] Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.262132 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" podUID="0ec00f11-942e-4b18-91f5-5efd88fe3f3a" containerName="route-controller-manager" containerID="cri-o://d20f84b7906e58a59401cd451efd71d034119d9e872f1a78f98333c24c981da5" gracePeriod=30 Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.455534 5118 generic.go:358] "Generic (PLEG): container finished" podID="0ec00f11-942e-4b18-91f5-5efd88fe3f3a" containerID="d20f84b7906e58a59401cd451efd71d034119d9e872f1a78f98333c24c981da5" exitCode=0 Dec 08 17:48:03 
crc kubenswrapper[5118]: I1208 17:48:03.455597 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" event={"ID":"0ec00f11-942e-4b18-91f5-5efd88fe3f3a","Type":"ContainerDied","Data":"d20f84b7906e58a59401cd451efd71d034119d9e872f1a78f98333c24c981da5"} Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.669722 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.736613 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-client-ca\") pod \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.736797 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-config\") pod \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.737003 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-serving-cert\") pod \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.737054 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-tmp\") pod \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.737080 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqhcx\" (UniqueName: \"kubernetes.io/projected/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-kube-api-access-bqhcx\") pod \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\" (UID: \"0ec00f11-942e-4b18-91f5-5efd88fe3f3a\") " Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.738178 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-config" (OuterVolumeSpecName: "config") pod "0ec00f11-942e-4b18-91f5-5efd88fe3f3a" (UID: "0ec00f11-942e-4b18-91f5-5efd88fe3f3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.738456 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-tmp" (OuterVolumeSpecName: "tmp") pod "0ec00f11-942e-4b18-91f5-5efd88fe3f3a" (UID: "0ec00f11-942e-4b18-91f5-5efd88fe3f3a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.738865 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.738895 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.739050 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ec00f11-942e-4b18-91f5-5efd88fe3f3a" (UID: "0ec00f11-942e-4b18-91f5-5efd88fe3f3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.745719 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ec00f11-942e-4b18-91f5-5efd88fe3f3a" (UID: "0ec00f11-942e-4b18-91f5-5efd88fe3f3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.745963 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-kube-api-access-bqhcx" (OuterVolumeSpecName: "kube-api-access-bqhcx") pod "0ec00f11-942e-4b18-91f5-5efd88fe3f3a" (UID: "0ec00f11-942e-4b18-91f5-5efd88fe3f3a"). InnerVolumeSpecName "kube-api-access-bqhcx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.756129 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc"] Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.757356 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec00f11-942e-4b18-91f5-5efd88fe3f3a" containerName="route-controller-manager" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.757487 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec00f11-942e-4b18-91f5-5efd88fe3f3a" containerName="route-controller-manager" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.757679 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec00f11-942e-4b18-91f5-5efd88fe3f3a" containerName="route-controller-manager" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.839830 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.840185 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:03 crc kubenswrapper[5118]: I1208 17:48:03.840204 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bqhcx\" (UniqueName: \"kubernetes.io/projected/0ec00f11-942e-4b18-91f5-5efd88fe3f3a-kube-api-access-bqhcx\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.267633 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc"] Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.268257 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.347834 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29p6h\" (UniqueName: \"kubernetes.io/projected/0b1ea033-2c13-4941-a658-0129d8822fb2-kube-api-access-29p6h\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.347911 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1ea033-2c13-4941-a658-0129d8822fb2-config\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.347930 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b1ea033-2c13-4941-a658-0129d8822fb2-tmp\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.348185 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1ea033-2c13-4941-a658-0129d8822fb2-client-ca\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.348515 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1ea033-2c13-4941-a658-0129d8822fb2-serving-cert\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.450185 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1ea033-2c13-4941-a658-0129d8822fb2-config\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.450280 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b1ea033-2c13-4941-a658-0129d8822fb2-tmp\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.450848 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b1ea033-2c13-4941-a658-0129d8822fb2-tmp\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " 
pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.450312 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1ea033-2c13-4941-a658-0129d8822fb2-client-ca\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.451965 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b1ea033-2c13-4941-a658-0129d8822fb2-client-ca\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.452100 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1ea033-2c13-4941-a658-0129d8822fb2-serving-cert\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.452847 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-29p6h\" (UniqueName: \"kubernetes.io/projected/0b1ea033-2c13-4941-a658-0129d8822fb2-kube-api-access-29p6h\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.453427 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1ea033-2c13-4941-a658-0129d8822fb2-config\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.459835 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b1ea033-2c13-4941-a658-0129d8822fb2-serving-cert\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.464642 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" event={"ID":"0ec00f11-942e-4b18-91f5-5efd88fe3f3a","Type":"ContainerDied","Data":"808736aa9807f44016256ad7e368d5805041910512efa8731f76387b642e9e6a"} Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.464706 5118 scope.go:117] "RemoveContainer" containerID="d20f84b7906e58a59401cd451efd71d034119d9e872f1a78f98333c24c981da5" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.464651 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.483712 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-29p6h\" (UniqueName: \"kubernetes.io/projected/0b1ea033-2c13-4941-a658-0129d8822fb2-kube-api-access-29p6h\") pod \"route-controller-manager-7dd6d6d8c8-wfznc\" (UID: \"0b1ea033-2c13-4941-a658-0129d8822fb2\") " pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.518386 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj"] Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.521325 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6975b9f87f-8vkdj"] Dec 08 17:48:04 crc kubenswrapper[5118]: I1208 17:48:04.586582 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:05 crc kubenswrapper[5118]: I1208 17:48:05.023189 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc"] Dec 08 17:48:05 crc kubenswrapper[5118]: W1208 17:48:05.044096 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b1ea033_2c13_4941_a658_0129d8822fb2.slice/crio-eccbf9022c048c422f4d063c310e16f26237a2cd496817eb6b591884401187ef WatchSource:0}: Error finding container eccbf9022c048c422f4d063c310e16f26237a2cd496817eb6b591884401187ef: Status 404 returned error can't find the container with id eccbf9022c048c422f4d063c310e16f26237a2cd496817eb6b591884401187ef Dec 08 17:48:05 crc kubenswrapper[5118]: I1208 17:48:05.442112 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec00f11-942e-4b18-91f5-5efd88fe3f3a" path="/var/lib/kubelet/pods/0ec00f11-942e-4b18-91f5-5efd88fe3f3a/volumes" Dec 08 17:48:05 crc kubenswrapper[5118]: I1208 17:48:05.473687 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" event={"ID":"0b1ea033-2c13-4941-a658-0129d8822fb2","Type":"ContainerStarted","Data":"1782673b184504ff9c4399cc60963cb66a18133053e33db6fe237689cb40f70b"} Dec 08 17:48:05 crc kubenswrapper[5118]: I1208 17:48:05.473729 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" event={"ID":"0b1ea033-2c13-4941-a658-0129d8822fb2","Type":"ContainerStarted","Data":"eccbf9022c048c422f4d063c310e16f26237a2cd496817eb6b591884401187ef"} Dec 08 17:48:05 crc kubenswrapper[5118]: I1208 17:48:05.474213 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:05 crc kubenswrapper[5118]: I1208 17:48:05.498057 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" podStartSLOduration=2.498037445 podStartE2EDuration="2.498037445s" podCreationTimestamp="2025-12-08 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-08 17:48:05.495959412 +0000 UTC m=+342.397283506" watchObservedRunningTime="2025-12-08 17:48:05.498037445 +0000 UTC m=+342.399361549" Dec 08 17:48:06 crc kubenswrapper[5118]: I1208 17:48:06.162510 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dd6d6d8c8-wfznc" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.346987 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxwl6"] Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.348348 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lxwl6" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerName="registry-server" containerID="cri-o://a178a9a0cec87f28ab4326f087a97ff391fb09d7bf9e2b5a008129f49bf869d3" gracePeriod=30 Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.356466 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r22jf"] Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.356793 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r22jf" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="registry-server" containerID="cri-o://19904e29b569326116dde0334fcdcfeccc18c7658bc563e099efdbf4a5c0fe55" gracePeriod=30 Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.379686 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-85wdh"] Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.380174 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" containerID="cri-o://722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13" gracePeriod=30 Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.386557 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvglb"] Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.386988 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvglb" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="registry-server" containerID="cri-o://784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e" gracePeriod=30 Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.396616 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6bbtn"] Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.406326 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfv6j"] Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.406584 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zfv6j" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="registry-server" containerID="cri-o://ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408" gracePeriod=30 Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.406739 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.417442 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6bbtn"] Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.501229 5118 generic.go:358] "Generic (PLEG): container finished" podID="cb8303fe-2019-44f4-a124-af174b28cc02" containerID="19904e29b569326116dde0334fcdcfeccc18c7658bc563e099efdbf4a5c0fe55" exitCode=0 Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.501323 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r22jf" event={"ID":"cb8303fe-2019-44f4-a124-af174b28cc02","Type":"ContainerDied","Data":"19904e29b569326116dde0334fcdcfeccc18c7658bc563e099efdbf4a5c0fe55"} Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.505525 5118 generic.go:358] "Generic (PLEG): container finished" podID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerID="a178a9a0cec87f28ab4326f087a97ff391fb09d7bf9e2b5a008129f49bf869d3" exitCode=0 Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.505670 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxwl6" event={"ID":"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf","Type":"ContainerDied","Data":"a178a9a0cec87f28ab4326f087a97ff391fb09d7bf9e2b5a008129f49bf869d3"} Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.516916 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.517075 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-tmp\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.517406 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46pz5\" (UniqueName: \"kubernetes.io/projected/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-kube-api-access-46pz5\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.517542 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.624392 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-tmp\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.624473 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-46pz5\" (UniqueName: \"kubernetes.io/projected/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-kube-api-access-46pz5\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.624535 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.624569 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.625593 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-tmp\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.626654 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.648336 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-46pz5\" (UniqueName: \"kubernetes.io/projected/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-kube-api-access-46pz5\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.648418 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-6bbtn\" (UID: \"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.789081 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.802655 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.807283 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.814574 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.864912 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.913510 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.933289 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-utilities\") pod \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.933366 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-trusted-ca\") pod \"9af82654-06bc-4376-bff5-d6adacce9785\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.933392 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-utilities\") pod \"fe467668-8954-4465-87ca-ef1d5f933d43\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.933420 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6xtn\" (UniqueName: \"kubernetes.io/projected/fe467668-8954-4465-87ca-ef1d5f933d43-kube-api-access-v6xtn\") pod \"fe467668-8954-4465-87ca-ef1d5f933d43\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.933450 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4p2p\" (UniqueName: \"kubernetes.io/projected/cb8303fe-2019-44f4-a124-af174b28cc02-kube-api-access-k4p2p\") pod \"cb8303fe-2019-44f4-a124-af174b28cc02\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.935226 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-utilities" (OuterVolumeSpecName: "utilities") pod "fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" (UID: "fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.935860 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9af82654-06bc-4376-bff5-d6adacce9785" (UID: "9af82654-06bc-4376-bff5-d6adacce9785"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.936796 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-catalog-content\") pod \"cb8303fe-2019-44f4-a124-af174b28cc02\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937013 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-utilities" (OuterVolumeSpecName: "utilities") pod "fe467668-8954-4465-87ca-ef1d5f933d43" (UID: "fe467668-8954-4465-87ca-ef1d5f933d43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937103 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7kr\" (UniqueName: \"kubernetes.io/projected/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-kube-api-access-ht7kr\") pod \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937180 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mm8b\" (UniqueName: \"kubernetes.io/projected/9af82654-06bc-4376-bff5-d6adacce9785-kube-api-access-2mm8b\") pod \"9af82654-06bc-4376-bff5-d6adacce9785\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937229 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-operator-metrics\") pod \"9af82654-06bc-4376-bff5-d6adacce9785\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937274 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-catalog-content\") pod \"fe467668-8954-4465-87ca-ef1d5f933d43\" (UID: \"fe467668-8954-4465-87ca-ef1d5f933d43\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937306 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af82654-06bc-4376-bff5-d6adacce9785-tmp\") pod \"9af82654-06bc-4376-bff5-d6adacce9785\" (UID: \"9af82654-06bc-4376-bff5-d6adacce9785\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937376 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-catalog-content\") pod \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\" (UID: \"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937404 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-utilities\") pod \"cb8303fe-2019-44f4-a124-af174b28cc02\" (UID: \"cb8303fe-2019-44f4-a124-af174b28cc02\") " Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937678 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937692 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.937701 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.942566 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe467668-8954-4465-87ca-ef1d5f933d43-kube-api-access-v6xtn" (OuterVolumeSpecName: "kube-api-access-v6xtn") pod "fe467668-8954-4465-87ca-ef1d5f933d43" (UID: "fe467668-8954-4465-87ca-ef1d5f933d43"). InnerVolumeSpecName "kube-api-access-v6xtn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.942703 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-utilities" (OuterVolumeSpecName: "utilities") pod "cb8303fe-2019-44f4-a124-af174b28cc02" (UID: "cb8303fe-2019-44f4-a124-af174b28cc02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.946112 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9af82654-06bc-4376-bff5-d6adacce9785" (UID: "9af82654-06bc-4376-bff5-d6adacce9785"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.947716 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af82654-06bc-4376-bff5-d6adacce9785-kube-api-access-2mm8b" (OuterVolumeSpecName: "kube-api-access-2mm8b") pod "9af82654-06bc-4376-bff5-d6adacce9785" (UID: "9af82654-06bc-4376-bff5-d6adacce9785"). InnerVolumeSpecName "kube-api-access-2mm8b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.948362 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-kube-api-access-ht7kr" (OuterVolumeSpecName: "kube-api-access-ht7kr") pod "fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" (UID: "fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf"). InnerVolumeSpecName "kube-api-access-ht7kr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.950626 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af82654-06bc-4376-bff5-d6adacce9785-tmp" (OuterVolumeSpecName: "tmp") pod "9af82654-06bc-4376-bff5-d6adacce9785" (UID: "9af82654-06bc-4376-bff5-d6adacce9785"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.952594 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe467668-8954-4465-87ca-ef1d5f933d43" (UID: "fe467668-8954-4465-87ca-ef1d5f933d43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.952669 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8303fe-2019-44f4-a124-af174b28cc02-kube-api-access-k4p2p" (OuterVolumeSpecName: "kube-api-access-k4p2p") pod "cb8303fe-2019-44f4-a124-af174b28cc02" (UID: "cb8303fe-2019-44f4-a124-af174b28cc02"). InnerVolumeSpecName "kube-api-access-k4p2p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.980310 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" (UID: "fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:09 crc kubenswrapper[5118]: I1208 17:48:09.996398 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb8303fe-2019-44f4-a124-af174b28cc02" (UID: "cb8303fe-2019-44f4-a124-af174b28cc02"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.038712 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-utilities\") pod \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.038808 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr9nr\" (UniqueName: \"kubernetes.io/projected/e2c92d64-3525-4675-bbe9-38bfe6dd4504-kube-api-access-nr9nr\") pod \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.038833 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-catalog-content\") pod \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\" (UID: \"e2c92d64-3525-4675-bbe9-38bfe6dd4504\") " Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039034 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6xtn\" (UniqueName: \"kubernetes.io/projected/fe467668-8954-4465-87ca-ef1d5f933d43-kube-api-access-v6xtn\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039046 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4p2p\" (UniqueName: \"kubernetes.io/projected/cb8303fe-2019-44f4-a124-af174b28cc02-kube-api-access-k4p2p\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039055 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039063 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ht7kr\" (UniqueName: \"kubernetes.io/projected/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-kube-api-access-ht7kr\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039072 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2mm8b\" (UniqueName: \"kubernetes.io/projected/9af82654-06bc-4376-bff5-d6adacce9785-kube-api-access-2mm8b\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039081 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9af82654-06bc-4376-bff5-d6adacce9785-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039090 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe467668-8954-4465-87ca-ef1d5f933d43-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039099 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af82654-06bc-4376-bff5-d6adacce9785-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039110 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.039119 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8303fe-2019-44f4-a124-af174b28cc02-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.040133 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-utilities" (OuterVolumeSpecName: "utilities") pod "e2c92d64-3525-4675-bbe9-38bfe6dd4504" (UID: "e2c92d64-3525-4675-bbe9-38bfe6dd4504"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.042354 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c92d64-3525-4675-bbe9-38bfe6dd4504-kube-api-access-nr9nr" (OuterVolumeSpecName: "kube-api-access-nr9nr") pod "e2c92d64-3525-4675-bbe9-38bfe6dd4504" (UID: "e2c92d64-3525-4675-bbe9-38bfe6dd4504"). InnerVolumeSpecName "kube-api-access-nr9nr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.127592 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2c92d64-3525-4675-bbe9-38bfe6dd4504" (UID: "e2c92d64-3525-4675-bbe9-38bfe6dd4504"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.140407 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.140446 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nr9nr\" (UniqueName: \"kubernetes.io/projected/e2c92d64-3525-4675-bbe9-38bfe6dd4504-kube-api-access-nr9nr\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.140459 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c92d64-3525-4675-bbe9-38bfe6dd4504-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.198308 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6bbtn"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.520792 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r22jf" event={"ID":"cb8303fe-2019-44f4-a124-af174b28cc02","Type":"ContainerDied","Data":"ad18d5cd7629954fa392ecb15a908995ca8664574c76fffd455810a5c533b257"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.521216 5118 scope.go:117] "RemoveContainer" containerID="19904e29b569326116dde0334fcdcfeccc18c7658bc563e099efdbf4a5c0fe55" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.520832 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r22jf" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.522596 5118 generic.go:358] "Generic (PLEG): container finished" podID="9af82654-06bc-4376-bff5-d6adacce9785" containerID="722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13" exitCode=0 Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.522679 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" event={"ID":"9af82654-06bc-4376-bff5-d6adacce9785","Type":"ContainerDied","Data":"722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.522699 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" event={"ID":"9af82654-06bc-4376-bff5-d6adacce9785","Type":"ContainerDied","Data":"3635ccac4190e9ac4d7e71077ab9092bae6db0a6613f789211d0b6f919a4a49e"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.522855 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-85wdh" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.526913 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxwl6" event={"ID":"fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf","Type":"ContainerDied","Data":"08a30309aab05f724b4d90b3610f7ad6b5bae8633f9e5f0e956fb4a55ca08d5c"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.526968 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lxwl6" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.536079 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" event={"ID":"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1","Type":"ContainerStarted","Data":"b3fbcf674e31285049a7caebf1f010017c82aef466fc5c4a4d9865fa1397cfdb"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.536131 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" event={"ID":"c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1","Type":"ContainerStarted","Data":"023ae68f913e3364024f45b9ef33e12f3f6e2e37229c46bb927a5eb359d414c6"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.536630 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.538090 5118 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-6bbtn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body= Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.538152 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" podUID="c3f09b88-c9bd-4d0b-9a10-2b2b5f2ea5b1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.540134 5118 generic.go:358] "Generic (PLEG): container finished" podID="fe467668-8954-4465-87ca-ef1d5f933d43" 
containerID="784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e" exitCode=0 Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.540246 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvglb" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.540290 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvglb" event={"ID":"fe467668-8954-4465-87ca-ef1d5f933d43","Type":"ContainerDied","Data":"784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.540315 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvglb" event={"ID":"fe467668-8954-4465-87ca-ef1d5f933d43","Type":"ContainerDied","Data":"300fcfe62cb2a2236d7576185a01858472eaf4d7b3901f788ba4cb7d1721d434"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.547116 5118 generic.go:358] "Generic (PLEG): container finished" podID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerID="ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408" exitCode=0 Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.547194 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfv6j" event={"ID":"e2c92d64-3525-4675-bbe9-38bfe6dd4504","Type":"ContainerDied","Data":"ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.547220 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfv6j" event={"ID":"e2c92d64-3525-4675-bbe9-38bfe6dd4504","Type":"ContainerDied","Data":"bad7cc15753758580e7b5d15966ebb1082d0a9a66fb5c9a65077ce2b2db411b6"} Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.547342 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfv6j" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.568858 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" podStartSLOduration=1.568838635 podStartE2EDuration="1.568838635s" podCreationTimestamp="2025-12-08 17:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:48:10.563105882 +0000 UTC m=+347.464429976" watchObservedRunningTime="2025-12-08 17:48:10.568838635 +0000 UTC m=+347.470162739" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.578242 5118 scope.go:117] "RemoveContainer" containerID="f6f2c5311c5f7c1b47e813d9109bbea34736ca2dcab8da2e32723d45e87698f1" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.583616 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r22jf"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.591342 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r22jf"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.614836 5118 scope.go:117] "RemoveContainer" containerID="9c387ba0d120976d9e95c5e677248d0f768c20788e2e7bb48afee673c174f607" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.616225 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxwl6"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.619645 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lxwl6"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.623575 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-85wdh"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.630036 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-85wdh"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.635279 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvglb"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.638271 5118 scope.go:117] "RemoveContainer" containerID="722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.638299 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvglb"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.644057 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfv6j"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.646032 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zfv6j"] Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.654525 5118 scope.go:117] "RemoveContainer" containerID="79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.669464 5118 scope.go:117] "RemoveContainer" containerID="722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.669922 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13\": container with ID starting with 722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13 not found: ID does not exist" containerID="722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.669957 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13"} err="failed to get container status \"722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13\": rpc error: code = NotFound desc = could not find container \"722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13\": container with ID starting with 722c18c7204724e7ab57760c6f7523235484ab7bf0593e7f00c4d92b2730cc13 not found: ID does not exist" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.669982 5118 scope.go:117] "RemoveContainer" containerID="79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.670287 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e\": container with ID starting with 79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e not found: ID does not exist" containerID="79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.670343 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e"} err="failed to get container status \"79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e\": rpc error: code = NotFound desc = could not find container \"79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e\": container with ID starting with 79455a2a0ec6c3aa629647780ba144ff7b1a2c579b6813f48db3d05f01da840e not found: ID does not exist" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.670364 5118 scope.go:117] "RemoveContainer" containerID="a178a9a0cec87f28ab4326f087a97ff391fb09d7bf9e2b5a008129f49bf869d3" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.684158 5118 scope.go:117] "RemoveContainer" containerID="bfcd02986882237453589d99ce15d916e7b8cb95a5ed00570d6c66ff9c01fd58" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.697323 5118 scope.go:117] "RemoveContainer" containerID="7d5a57573a287b700fca389071c6e934a33bbae0a14200922458d1c8c760f5b8" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.714471 5118 scope.go:117] "RemoveContainer" containerID="784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.726463 5118 scope.go:117] "RemoveContainer" containerID="50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.738333 5118 scope.go:117] "RemoveContainer" containerID="894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.750660 5118 scope.go:117] "RemoveContainer" containerID="784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.751039 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e\": container with ID starting with 784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e not found: ID does not exist" containerID="784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.751085 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e"} err="failed to get container status \"784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e\": rpc error: code = NotFound desc = could not find container \"784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e\": container with ID starting with 784aacda702054432bdfdb7856bc84d016576f8ddcb91453c196cae71ca3641e not found: ID does not exist" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.751120 5118 scope.go:117] "RemoveContainer" containerID="50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.751468 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610\": container with ID starting with 50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610 not found: ID does not exist" containerID="50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.751504 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610"} err="failed to get container status \"50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610\": rpc error: code = NotFound desc = could not find container \"50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610\": container with ID starting with 50a485ccd5872019fccd3a3aa2c2e0e5a919d6c131eccce9de8984195ed1c610 not found: ID does not exist" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.751523 5118 scope.go:117] "RemoveContainer" containerID="894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.751726 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14\": container with ID starting with 894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14 not found: ID does not exist" containerID="894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.751773 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14"} err="failed to get container status \"894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14\": rpc error: code = NotFound desc = could not find container \"894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14\": container with ID starting with 894a57bd149b073f0fef79ee9ec030db9a8c6c0ce09c4dcb3b54776522864d14 not found: ID does not exist" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.751794 5118 scope.go:117] "RemoveContainer" containerID="ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408" Dec 08 17:48:10 crc 
kubenswrapper[5118]: I1208 17:48:10.765319 5118 scope.go:117] "RemoveContainer" containerID="88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.781029 5118 scope.go:117] "RemoveContainer" containerID="920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.800137 5118 scope.go:117] "RemoveContainer" containerID="ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.800609 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408\": container with ID starting with ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408 not found: ID does not exist" containerID="ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.800654 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408"} err="failed to get container status \"ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408\": rpc error: code = NotFound desc = could not find container \"ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408\": container with ID starting with ef908685a3f65f3c967fd946e3c2b26bf81e80fe6f4bd7d21c7d02fcfaaf1408 not found: ID does not exist" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.800685 5118 scope.go:117] "RemoveContainer" containerID="88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.801095 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f\": container with ID starting with 88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f not found: ID does not exist" containerID="88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.801124 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f"} err="failed to get container status \"88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f\": rpc error: code = NotFound desc = could not find container \"88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f\": container with ID starting with 88a18d41d676864b0bf1547cd9ab99433a44c623365c7c3e11a312a63eb01d5f not found: ID does not exist" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.801144 5118 scope.go:117] "RemoveContainer" containerID="920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85" Dec 08 17:48:10 crc kubenswrapper[5118]: E1208 17:48:10.801414 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85\": container with ID starting with 920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85 not found: ID does not exist" containerID="920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85" Dec 08 17:48:10 crc kubenswrapper[5118]: I1208 17:48:10.801464 5118 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85"} err="failed to get container status \"920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85\": rpc error: code = NotFound desc = could not find container \"920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85\": container with ID starting with 920e7485278ee475bff410e78c49fb30248bc283ba910377451d2fc403b1dc85 not found: ID does not exist" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.166711 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58d6l"] Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168388 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168434 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168487 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168505 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168535 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168551 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168578 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168595 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168615 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168631 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168652 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168669 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168701 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168717 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" 
containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168739 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168756 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168783 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168799 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168833 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168848 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="extract-utilities" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168866 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168919 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168939 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168955 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168981 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.168997 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.169021 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.169036 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="extract-content" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.169224 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.169251 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.169275 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" containerName="registry-server" Dec 08 17:48:11 crc 
kubenswrapper[5118]: I1208 17:48:11.169306 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.169328 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9af82654-06bc-4376-bff5-d6adacce9785" containerName="marketplace-operator" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.169353 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" containerName="registry-server" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.183797 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58d6l"] Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.183927 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.186209 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.252794 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af364a45-2b54-442a-b71a-4032d578bc89-catalog-content\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.253111 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af364a45-2b54-442a-b71a-4032d578bc89-utilities\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.253180 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wg2j\" (UniqueName: \"kubernetes.io/projected/af364a45-2b54-442a-b71a-4032d578bc89-kube-api-access-4wg2j\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.354580 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af364a45-2b54-442a-b71a-4032d578bc89-catalog-content\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.354683 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af364a45-2b54-442a-b71a-4032d578bc89-utilities\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.354714 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wg2j\" (UniqueName: \"kubernetes.io/projected/af364a45-2b54-442a-b71a-4032d578bc89-kube-api-access-4wg2j\") pod \"certified-operators-58d6l\" (UID: 
\"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.355262 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af364a45-2b54-442a-b71a-4032d578bc89-utilities\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.355317 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af364a45-2b54-442a-b71a-4032d578bc89-catalog-content\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.376742 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wg2j\" (UniqueName: \"kubernetes.io/projected/af364a45-2b54-442a-b71a-4032d578bc89-kube-api-access-4wg2j\") pod \"certified-operators-58d6l\" (UID: \"af364a45-2b54-442a-b71a-4032d578bc89\") " pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.436674 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af82654-06bc-4376-bff5-d6adacce9785" path="/var/lib/kubelet/pods/9af82654-06bc-4376-bff5-d6adacce9785/volumes" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.437436 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8303fe-2019-44f4-a124-af174b28cc02" path="/var/lib/kubelet/pods/cb8303fe-2019-44f4-a124-af174b28cc02/volumes" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.438242 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c92d64-3525-4675-bbe9-38bfe6dd4504" path="/var/lib/kubelet/pods/e2c92d64-3525-4675-bbe9-38bfe6dd4504/volumes" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.439544 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe467668-8954-4465-87ca-ef1d5f933d43" path="/var/lib/kubelet/pods/fe467668-8954-4465-87ca-ef1d5f933d43/volumes" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.440346 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf" path="/var/lib/kubelet/pods/fe8486ce-b0ff-43e5-b2a4-3e1d81feeebf/volumes" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.512398 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.561449 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-6bbtn" Dec 08 17:48:11 crc kubenswrapper[5118]: I1208 17:48:11.919960 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58d6l"] Dec 08 17:48:11 crc kubenswrapper[5118]: W1208 17:48:11.925113 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf364a45_2b54_442a_b71a_4032d578bc89.slice/crio-6de3e2a5fd64a82fb7314939a1915d8ecbf9ba4a8c0d8b9710455241d403b89e WatchSource:0}: Error finding container 6de3e2a5fd64a82fb7314939a1915d8ecbf9ba4a8c0d8b9710455241d403b89e: Status 404 returned error can't find the container with id 6de3e2a5fd64a82fb7314939a1915d8ecbf9ba4a8c0d8b9710455241d403b89e Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.158927 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xp5vr"] Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.166586 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xp5vr"] Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.166719 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.170692 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.268057 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-utilities\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.268137 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9pg\" (UniqueName: \"kubernetes.io/projected/c9416e49-5134-45de-9eeb-a15be7fdbf63-kube-api-access-jn9pg\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.268213 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-catalog-content\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.369779 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-catalog-content\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.369850 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-utilities\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.369921 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9pg\" (UniqueName: \"kubernetes.io/projected/c9416e49-5134-45de-9eeb-a15be7fdbf63-kube-api-access-jn9pg\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.370688 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-catalog-content\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.370969 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-utilities\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.392391 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9pg\" (UniqueName: \"kubernetes.io/projected/c9416e49-5134-45de-9eeb-a15be7fdbf63-kube-api-access-jn9pg\") pod \"redhat-marketplace-xp5vr\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.484015 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.563594 5118 generic.go:358] "Generic (PLEG): container finished" podID="af364a45-2b54-442a-b71a-4032d578bc89" containerID="dfdcac4d59aed18f1c9a483754b21cdf400db6234d4a63fc703bea5c07cf3eb8" exitCode=0 Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.563762 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58d6l" event={"ID":"af364a45-2b54-442a-b71a-4032d578bc89","Type":"ContainerDied","Data":"dfdcac4d59aed18f1c9a483754b21cdf400db6234d4a63fc703bea5c07cf3eb8"} Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.563950 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58d6l" event={"ID":"af364a45-2b54-442a-b71a-4032d578bc89","Type":"ContainerStarted","Data":"6de3e2a5fd64a82fb7314939a1915d8ecbf9ba4a8c0d8b9710455241d403b89e"} Dec 08 17:48:12 crc kubenswrapper[5118]: I1208 17:48:12.739762 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xp5vr"] Dec 08 17:48:12 crc kubenswrapper[5118]: W1208 17:48:12.750795 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9416e49_5134_45de_9eeb_a15be7fdbf63.slice/crio-c11f84302bfe3264cf3e55e89a65907964bdd273130b6ff7fe1c6969648837c5 WatchSource:0}: Error finding container c11f84302bfe3264cf3e55e89a65907964bdd273130b6ff7fe1c6969648837c5: Status 404 returned error can't find the container with id c11f84302bfe3264cf3e55e89a65907964bdd273130b6ff7fe1c6969648837c5 Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.555545 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xpnf9"] Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.559703 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.563476 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.566601 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xpnf9"] Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.571127 5118 generic.go:358] "Generic (PLEG): container finished" podID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerID="15abf83ac167b026161991b2c26d6b4469f123748db4a90d01e99f1268ea71ff" exitCode=0 Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.571209 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp5vr" event={"ID":"c9416e49-5134-45de-9eeb-a15be7fdbf63","Type":"ContainerDied","Data":"15abf83ac167b026161991b2c26d6b4469f123748db4a90d01e99f1268ea71ff"} Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.571239 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp5vr" event={"ID":"c9416e49-5134-45de-9eeb-a15be7fdbf63","Type":"ContainerStarted","Data":"c11f84302bfe3264cf3e55e89a65907964bdd273130b6ff7fe1c6969648837c5"} Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.574020 5118 generic.go:358] "Generic (PLEG): container finished" podID="af364a45-2b54-442a-b71a-4032d578bc89" containerID="45b18931cb9294a076526f17865fb22d0df3d32bb20a0c0814b7b47a9bd0058a" exitCode=0 Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.574792 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58d6l" event={"ID":"af364a45-2b54-442a-b71a-4032d578bc89","Type":"ContainerDied","Data":"45b18931cb9294a076526f17865fb22d0df3d32bb20a0c0814b7b47a9bd0058a"} Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.685763 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-catalog-content\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.685996 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-utilities\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.686070 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxpq7\" (UniqueName: \"kubernetes.io/projected/259174f2-efbe-4b44-ae95-b0d2f2865ab9-kube-api-access-gxpq7\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.787859 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-catalog-content\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 
17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.787969 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-utilities\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.788001 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxpq7\" (UniqueName: \"kubernetes.io/projected/259174f2-efbe-4b44-ae95-b0d2f2865ab9-kube-api-access-gxpq7\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.788452 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-catalog-content\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.789134 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-utilities\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.814082 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxpq7\" (UniqueName: \"kubernetes.io/projected/259174f2-efbe-4b44-ae95-b0d2f2865ab9-kube-api-access-gxpq7\") pod \"redhat-operators-xpnf9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:13 crc kubenswrapper[5118]: I1208 17:48:13.899620 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.332533 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xpnf9"] Dec 08 17:48:14 crc kubenswrapper[5118]: W1208 17:48:14.335992 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod259174f2_efbe_4b44_ae95_b0d2f2865ab9.slice/crio-2e369e900be26b57b9f7a1bc5cab886fe858f0af35227f2d72416c136d57cef3 WatchSource:0}: Error finding container 2e369e900be26b57b9f7a1bc5cab886fe858f0af35227f2d72416c136d57cef3: Status 404 returned error can't find the container with id 2e369e900be26b57b9f7a1bc5cab886fe858f0af35227f2d72416c136d57cef3 Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.551622 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zdvxg"] Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.559860 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.563029 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.566118 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdvxg"] Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.585636 5118 generic.go:358] "Generic (PLEG): container finished" podID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerID="b8e08b1bcf5296444229869ccbb1d5e6b8236afceeff928c15285308a30d17bb" exitCode=0 Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.585732 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp5vr" event={"ID":"c9416e49-5134-45de-9eeb-a15be7fdbf63","Type":"ContainerDied","Data":"b8e08b1bcf5296444229869ccbb1d5e6b8236afceeff928c15285308a30d17bb"} Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.592258 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58d6l" event={"ID":"af364a45-2b54-442a-b71a-4032d578bc89","Type":"ContainerStarted","Data":"c8dbf860838a1235bbef012223e2756f33aa5484a640d375d4dfaff7512b3632"} Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.593543 5118 generic.go:358] "Generic (PLEG): container finished" podID="259174f2-efbe-4b44-ae95-b0d2f2865ab9" containerID="3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367" exitCode=0 Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.593675 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpnf9" event={"ID":"259174f2-efbe-4b44-ae95-b0d2f2865ab9","Type":"ContainerDied","Data":"3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367"} Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.593728 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpnf9" event={"ID":"259174f2-efbe-4b44-ae95-b0d2f2865ab9","Type":"ContainerStarted","Data":"2e369e900be26b57b9f7a1bc5cab886fe858f0af35227f2d72416c136d57cef3"} Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.629969 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58d6l" podStartSLOduration=2.922422752 podStartE2EDuration="3.629950607s" podCreationTimestamp="2025-12-08 17:48:11 +0000 UTC" firstStartedPulling="2025-12-08 17:48:12.565227828 +0000 UTC m=+349.466551962" lastFinishedPulling="2025-12-08 17:48:13.272755723 +0000 UTC m=+350.174079817" observedRunningTime="2025-12-08 17:48:14.628601245 +0000 UTC m=+351.529925359" watchObservedRunningTime="2025-12-08 17:48:14.629950607 +0000 UTC m=+351.531274701" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.701193 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tfk\" (UniqueName: \"kubernetes.io/projected/a52a5ff3-1e70-4b19-b013-95206cae40fc-kube-api-access-d8tfk\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.701551 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a52a5ff3-1e70-4b19-b013-95206cae40fc-utilities\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.701614 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52a5ff3-1e70-4b19-b013-95206cae40fc-catalog-content\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.802610 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a52a5ff3-1e70-4b19-b013-95206cae40fc-utilities\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.802834 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52a5ff3-1e70-4b19-b013-95206cae40fc-catalog-content\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.802966 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8tfk\" (UniqueName: \"kubernetes.io/projected/a52a5ff3-1e70-4b19-b013-95206cae40fc-kube-api-access-d8tfk\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.803102 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a52a5ff3-1e70-4b19-b013-95206cae40fc-utilities\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.803215 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52a5ff3-1e70-4b19-b013-95206cae40fc-catalog-content\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.828654 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8tfk\" (UniqueName: \"kubernetes.io/projected/a52a5ff3-1e70-4b19-b013-95206cae40fc-kube-api-access-d8tfk\") pod \"community-operators-zdvxg\" (UID: \"a52a5ff3-1e70-4b19-b013-95206cae40fc\") " pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:14 crc kubenswrapper[5118]: I1208 17:48:14.880800 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:15 crc kubenswrapper[5118]: I1208 17:48:15.078139 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdvxg"] Dec 08 17:48:15 crc kubenswrapper[5118]: I1208 17:48:15.602211 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpnf9" event={"ID":"259174f2-efbe-4b44-ae95-b0d2f2865ab9","Type":"ContainerStarted","Data":"263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75"} Dec 08 17:48:15 crc kubenswrapper[5118]: I1208 17:48:15.604361 5118 generic.go:358] "Generic (PLEG): container finished" podID="a52a5ff3-1e70-4b19-b013-95206cae40fc" containerID="d37c87abdb0c90b0a7d55f612e49c42e8c3e6c69b8a6d97720639b4588adaaa9" exitCode=0 Dec 08 17:48:15 crc kubenswrapper[5118]: I1208 17:48:15.605910 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdvxg" event={"ID":"a52a5ff3-1e70-4b19-b013-95206cae40fc","Type":"ContainerDied","Data":"d37c87abdb0c90b0a7d55f612e49c42e8c3e6c69b8a6d97720639b4588adaaa9"} Dec 08 17:48:15 crc kubenswrapper[5118]: I1208 17:48:15.605970 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdvxg" event={"ID":"a52a5ff3-1e70-4b19-b013-95206cae40fc","Type":"ContainerStarted","Data":"8808a88a8b0d81bcfadb7a2fb65038f9c08c99f71348b280e6aee0991c20edd9"} Dec 08 17:48:15 crc kubenswrapper[5118]: I1208 17:48:15.617532 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp5vr" event={"ID":"c9416e49-5134-45de-9eeb-a15be7fdbf63","Type":"ContainerStarted","Data":"0a7c833b3c15ad97f73ff137d4555fe46e5a82ea584c876d53847d21edb4686c"} Dec 08 17:48:15 crc kubenswrapper[5118]: I1208 17:48:15.673167 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xp5vr" podStartSLOduration=3.086793938 podStartE2EDuration="3.673153289s" podCreationTimestamp="2025-12-08 17:48:12 +0000 UTC" firstStartedPulling="2025-12-08 17:48:13.571894104 +0000 UTC m=+350.473218198" lastFinishedPulling="2025-12-08 17:48:14.158253455 +0000 UTC m=+351.059577549" observedRunningTime="2025-12-08 17:48:15.671982774 +0000 UTC m=+352.573306888" watchObservedRunningTime="2025-12-08 17:48:15.673153289 +0000 UTC m=+352.574477393" Dec 08 17:48:16 crc kubenswrapper[5118]: I1208 17:48:16.621696 5118 generic.go:358] "Generic (PLEG): container finished" podID="259174f2-efbe-4b44-ae95-b0d2f2865ab9" containerID="263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75" exitCode=0 Dec 08 17:48:16 crc kubenswrapper[5118]: I1208 17:48:16.622118 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpnf9" event={"ID":"259174f2-efbe-4b44-ae95-b0d2f2865ab9","Type":"ContainerDied","Data":"263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75"} Dec 08 17:48:16 crc kubenswrapper[5118]: I1208 17:48:16.625522 5118 generic.go:358] "Generic (PLEG): container finished" podID="a52a5ff3-1e70-4b19-b013-95206cae40fc" containerID="0b8ce98765c583b646786c3ba9e56320e1d5be5966af9776c70f661c9ba9fde1" exitCode=0 Dec 08 17:48:16 crc kubenswrapper[5118]: I1208 17:48:16.625837 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdvxg" 
event={"ID":"a52a5ff3-1e70-4b19-b013-95206cae40fc","Type":"ContainerDied","Data":"0b8ce98765c583b646786c3ba9e56320e1d5be5966af9776c70f661c9ba9fde1"} Dec 08 17:48:17 crc kubenswrapper[5118]: I1208 17:48:17.632612 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpnf9" event={"ID":"259174f2-efbe-4b44-ae95-b0d2f2865ab9","Type":"ContainerStarted","Data":"8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f"} Dec 08 17:48:17 crc kubenswrapper[5118]: I1208 17:48:17.634711 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdvxg" event={"ID":"a52a5ff3-1e70-4b19-b013-95206cae40fc","Type":"ContainerStarted","Data":"260a760494a2b670798f34fcbfa2ee577375104ed3fef77d31d605ff1b4806f2"} Dec 08 17:48:17 crc kubenswrapper[5118]: I1208 17:48:17.651511 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xpnf9" podStartSLOduration=3.938945236 podStartE2EDuration="4.651482744s" podCreationTimestamp="2025-12-08 17:48:13 +0000 UTC" firstStartedPulling="2025-12-08 17:48:14.600054712 +0000 UTC m=+351.501378806" lastFinishedPulling="2025-12-08 17:48:15.31259222 +0000 UTC m=+352.213916314" observedRunningTime="2025-12-08 17:48:17.650113913 +0000 UTC m=+354.551438027" watchObservedRunningTime="2025-12-08 17:48:17.651482744 +0000 UTC m=+354.552806838" Dec 08 17:48:17 crc kubenswrapper[5118]: I1208 17:48:17.672176 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zdvxg" podStartSLOduration=3.166909205 podStartE2EDuration="3.672160021s" podCreationTimestamp="2025-12-08 17:48:14 +0000 UTC" firstStartedPulling="2025-12-08 17:48:15.60575803 +0000 UTC m=+352.507082124" lastFinishedPulling="2025-12-08 17:48:16.111008846 +0000 UTC m=+353.012332940" observedRunningTime="2025-12-08 17:48:17.669442898 +0000 UTC m=+354.570767002" watchObservedRunningTime="2025-12-08 17:48:17.672160021 +0000 UTC m=+354.573484115" Dec 08 17:48:21 crc kubenswrapper[5118]: I1208 17:48:21.512971 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:21 crc kubenswrapper[5118]: I1208 17:48:21.513679 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:21 crc kubenswrapper[5118]: I1208 17:48:21.566594 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:21 crc kubenswrapper[5118]: I1208 17:48:21.691090 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58d6l" Dec 08 17:48:22 crc kubenswrapper[5118]: I1208 17:48:22.485252 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:22 crc kubenswrapper[5118]: I1208 17:48:22.485307 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:22 crc kubenswrapper[5118]: I1208 17:48:22.540240 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:22 crc kubenswrapper[5118]: I1208 17:48:22.690562 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:48:23 crc kubenswrapper[5118]: I1208 17:48:23.900429 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:23 crc kubenswrapper[5118]: I1208 17:48:23.901395 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:23 crc kubenswrapper[5118]: I1208 17:48:23.938819 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:24 crc kubenswrapper[5118]: I1208 17:48:24.717354 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 17:48:24 crc kubenswrapper[5118]: I1208 17:48:24.881342 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:24 crc kubenswrapper[5118]: I1208 17:48:24.881395 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:24 crc kubenswrapper[5118]: I1208 17:48:24.919827 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:48:25 crc kubenswrapper[5118]: I1208 17:48:25.710742 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zdvxg" Dec 08 17:50:01 crc kubenswrapper[5118]: I1208 17:50:01.963052 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:50:01 crc kubenswrapper[5118]: I1208 17:50:01.963737 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:50:31 crc kubenswrapper[5118]: I1208 17:50:31.963109 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:50:31 crc kubenswrapper[5118]: I1208 17:50:31.963935 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:51:01 crc kubenswrapper[5118]: I1208 17:51:01.962110 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:51:01 crc kubenswrapper[5118]: I1208 17:51:01.962658 5118 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:51:01 crc kubenswrapper[5118]: I1208 17:51:01.962706 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:51:01 crc kubenswrapper[5118]: I1208 17:51:01.963335 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0ca934293bb401de268428d32fee96419e1934766145fbcb973b04a905f6519"} pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:51:01 crc kubenswrapper[5118]: I1208 17:51:01.963393 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" containerID="cri-o://b0ca934293bb401de268428d32fee96419e1934766145fbcb973b04a905f6519" gracePeriod=600 Dec 08 17:51:02 crc kubenswrapper[5118]: I1208 17:51:02.668805 5118 generic.go:358] "Generic (PLEG): container finished" podID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerID="b0ca934293bb401de268428d32fee96419e1934766145fbcb973b04a905f6519" exitCode=0 Dec 08 17:51:02 crc kubenswrapper[5118]: I1208 17:51:02.668926 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerDied","Data":"b0ca934293bb401de268428d32fee96419e1934766145fbcb973b04a905f6519"} Dec 08 17:51:02 crc kubenswrapper[5118]: I1208 17:51:02.669316 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"e127f5f6ea947945bd90450d12f167e6419e8af6b0458b462fdc7e8064751458"} Dec 08 17:51:02 crc kubenswrapper[5118]: I1208 17:51:02.669341 5118 scope.go:117] "RemoveContainer" containerID="d85cd5695eb2bfdc0550d3965b70689a69b9c315b96786c2d8f2213d1fc4d407" Dec 08 17:52:23 crc kubenswrapper[5118]: I1208 17:52:23.644787 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:52:23 crc kubenswrapper[5118]: I1208 17:52:23.645049 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:52:36 crc kubenswrapper[5118]: I1208 17:52:36.998202 5118 ???:1] "http: TLS handshake error from 192.168.126.11:39076: no serving certificate available for the kubelet" Dec 08 17:53:31 crc kubenswrapper[5118]: I1208 17:53:31.962567 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:53:31 crc kubenswrapper[5118]: I1208 17:53:31.963267 5118 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.141075 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp"] Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.142386 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="kube-rbac-proxy" containerID="cri-o://a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.143019 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="ovnkube-cluster-manager" containerID="cri-o://7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.341014 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.341310 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr4x4"] Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.342038 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-controller" containerID="cri-o://6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.342062 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="northd" containerID="cri-o://3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.342124 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-node" containerID="cri-o://a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.342155 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="nbdb" containerID="cri-o://2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.342192 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-acl-logging" containerID="cri-o://94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.342227 5118 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.342266 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="sbdb" containerID="cri-o://3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.370430 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m"] Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.371151 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="ovnkube-cluster-manager" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.373008 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="ovnkube-cluster-manager" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.373053 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="kube-rbac-proxy" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.373065 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="kube-rbac-proxy" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.373327 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="ovnkube-cluster-manager" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.373351 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerName="kube-rbac-proxy" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.376644 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.385413 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovnkube-controller" containerID="cri-o://9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" gracePeriod=30 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.472538 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7svb\" (UniqueName: \"kubernetes.io/projected/94c49b3d-dce8-4a73-895a-32a521a06b22-kube-api-access-s7svb\") pod \"94c49b3d-dce8-4a73-895a-32a521a06b22\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.472644 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-ovnkube-config\") pod \"94c49b3d-dce8-4a73-895a-32a521a06b22\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.472712 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c49b3d-dce8-4a73-895a-32a521a06b22-ovn-control-plane-metrics-cert\") pod \"94c49b3d-dce8-4a73-895a-32a521a06b22\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.472755 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-env-overrides\") pod \"94c49b3d-dce8-4a73-895a-32a521a06b22\" (UID: \"94c49b3d-dce8-4a73-895a-32a521a06b22\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.473135 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.473194 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.473215 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqtn6\" (UniqueName: \"kubernetes.io/projected/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-kube-api-access-dqtn6\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.473272 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.474141 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "94c49b3d-dce8-4a73-895a-32a521a06b22" (UID: "94c49b3d-dce8-4a73-895a-32a521a06b22"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.474174 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "94c49b3d-dce8-4a73-895a-32a521a06b22" (UID: "94c49b3d-dce8-4a73-895a-32a521a06b22"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.478354 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94c49b3d-dce8-4a73-895a-32a521a06b22-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "94c49b3d-dce8-4a73-895a-32a521a06b22" (UID: "94c49b3d-dce8-4a73-895a-32a521a06b22"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.478538 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c49b3d-dce8-4a73-895a-32a521a06b22-kube-api-access-s7svb" (OuterVolumeSpecName: "kube-api-access-s7svb") pod "94c49b3d-dce8-4a73-895a-32a521a06b22" (UID: "94c49b3d-dce8-4a73-895a-32a521a06b22"). InnerVolumeSpecName "kube-api-access-s7svb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.588632 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.588688 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqtn6\" (UniqueName: \"kubernetes.io/projected/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-kube-api-access-dqtn6\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.588746 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.588845 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.588936 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.588963 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c49b3d-dce8-4a73-895a-32a521a06b22-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.588982 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c49b3d-dce8-4a73-895a-32a521a06b22-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.589051 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s7svb\" (UniqueName: \"kubernetes.io/projected/94c49b3d-dce8-4a73-895a-32a521a06b22-kube-api-access-s7svb\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.589343 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.589505 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.595366 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.607228 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqtn6\" (UniqueName: \"kubernetes.io/projected/8105d3ef-5e53-4418-9d0c-12f9b6ffa67f-kube-api-access-dqtn6\") pod \"ovnkube-control-plane-97c9b6c48-lfp2m\" (UID: \"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.668548 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr4x4_cf2ef9c8-a4c9-4273-9236-43ea37f1e277/ovn-acl-logging/0.log" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.669485 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr4x4_cf2ef9c8-a4c9-4273-9236-43ea37f1e277/ovn-controller/0.log" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.670291 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675257 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr4x4_cf2ef9c8-a4c9-4273-9236-43ea37f1e277/ovn-acl-logging/0.log" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675647 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr4x4_cf2ef9c8-a4c9-4273-9236-43ea37f1e277/ovn-controller/0.log" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675916 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675938 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675946 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675954 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675948 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" 
event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676003 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676026 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.675962 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676046 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676065 5118 scope.go:117] "RemoveContainer" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676065 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676059 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676255 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" exitCode=143 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676265 5118 generic.go:358] "Generic (PLEG): container finished" podID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerID="6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" exitCode=143 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.676049 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.679312 5118 generic.go:358] "Generic (PLEG): container finished" podID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerID="7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.679339 5118 generic.go:358] "Generic (PLEG): container finished" podID="94c49b3d-dce8-4a73-895a-32a521a06b22" containerID="a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1" exitCode=0 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.679624 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.682830 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.682957 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.682987 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683007 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683029 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683053 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683086 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683101 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683115 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683130 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683142 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683155 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683165 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683176 5118 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683193 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683212 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683224 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683234 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683245 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683256 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683274 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683284 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683296 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683306 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683321 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr4x4" event={"ID":"cf2ef9c8-a4c9-4273-9236-43ea37f1e277","Type":"ContainerDied","Data":"0c6086c3886b43de86d23f8567df25b9e847eae885fa03059bea7c7f7ee6ad9a"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683338 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683351 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} 
Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683361 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683371 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683381 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683392 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683402 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683412 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683422 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683442 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" event={"ID":"94c49b3d-dce8-4a73-895a-32a521a06b22","Type":"ContainerDied","Data":"7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683460 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683472 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683487 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" event={"ID":"94c49b3d-dce8-4a73-895a-32a521a06b22","Type":"ContainerDied","Data":"a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683504 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683516 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683531 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp" event={"ID":"94c49b3d-dce8-4a73-895a-32a521a06b22","Type":"ContainerDied","Data":"01292c3808477622b154cb0dbf945d6f4ebae1e270172a575c142ec764f01ed1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683548 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.683560 5118 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.700086 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.701640 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dlvbf_a091751f-234c-43ee-8324-ebb98bb3ec36/kube-multus/0.log" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.701732 5118 generic.go:358] "Generic (PLEG): container finished" podID="a091751f-234c-43ee-8324-ebb98bb3ec36" containerID="3ec5898360e3a8a8e2d6ba11a4a74a5c238597fccf0ae0ce228c5792483aee54" exitCode=2 Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.701965 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dlvbf" event={"ID":"a091751f-234c-43ee-8324-ebb98bb3ec36","Type":"ContainerDied","Data":"3ec5898360e3a8a8e2d6ba11a4a74a5c238597fccf0ae0ce228c5792483aee54"} Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.705120 5118 scope.go:117] "RemoveContainer" containerID="3ec5898360e3a8a8e2d6ba11a4a74a5c238597fccf0ae0ce228c5792483aee54" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.720401 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.721187 5118 scope.go:117] "RemoveContainer" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.725441 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gpg4k"] Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726111 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-node" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726137 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-node" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726181 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="sbdb" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726190 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="sbdb" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726206 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726216 5118 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726234 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-controller" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726241 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-controller" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726251 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-acl-logging" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726258 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-acl-logging" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726266 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="nbdb" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726274 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="nbdb" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726285 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="northd" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726292 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="northd" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726305 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovnkube-controller" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726313 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovnkube-controller" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726321 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kubecfg-setup" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726329 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kubecfg-setup" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726428 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726441 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="nbdb" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726453 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovnkube-controller" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726466 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="sbdb" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726476 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="kube-rbac-proxy-node" Dec 08 17:53:41 crc 
kubenswrapper[5118]: I1208 17:53:41.726486 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="northd" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726496 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-acl-logging" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.726507 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" containerName="ovn-controller" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.744775 5118 scope.go:117] "RemoveContainer" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.746645 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.752537 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp"] Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.756844 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-x68jp"] Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.778015 5118 scope.go:117] "RemoveContainer" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.792810 5118 scope.go:117] "RemoveContainer" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.795529 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-script-lib\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.795606 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-var-lib-openvswitch\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.795638 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.795803 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-node-log\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.795887 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-node-log" (OuterVolumeSpecName: "node-log") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). 
InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.795842 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-systemd-units\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.795946 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796029 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-ovn-kubernetes\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796049 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796181 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796201 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796187 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-netns\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796255 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-config\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796279 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ftnr\" (UniqueName: \"kubernetes.io/projected/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-kube-api-access-2ftnr\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796310 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-log-socket\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796347 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-log-socket" (OuterVolumeSpecName: "log-socket") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796465 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-env-overrides\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796500 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-kubelet\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796537 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-systemd\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796554 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796571 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-netd\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796599 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-ovn\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796619 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-etc-openvswitch\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796652 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-bin\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796656 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796684 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovn-node-metrics-cert\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796690 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796711 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-openvswitch\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796703 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796699 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796738 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796767 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796804 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-slash\") pod \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\" (UID: \"cf2ef9c8-a4c9-4273-9236-43ea37f1e277\") " Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796831 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796857 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796958 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-run-netns\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796968 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796979 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-slash" (OuterVolumeSpecName: "host-slash") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.796991 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovn-node-metrics-cert\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797016 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovnkube-script-lib\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797048 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-run-ovn-kubernetes\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797154 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-kubelet\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797190 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-node-log\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797232 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-ovn\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797260 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-cni-bin\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797303 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-etc-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797334 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-var-lib-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797355 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-systemd-units\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797398 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zw8\" (UniqueName: \"kubernetes.io/projected/abafac28-99fe-42a6-bee9-f3fb197b1bc2-kube-api-access-v7zw8\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797418 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797451 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-env-overrides\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797479 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-cni-netd\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797499 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovnkube-config\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797560 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-systemd\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 
17:53:41.797592 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-log-socket\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797617 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-slash\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797637 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797718 5118 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797738 5118 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797753 5118 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797765 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797777 5118 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-log-socket\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797788 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797798 5118 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797811 5118 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797824 5118 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 
17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797836 5118 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797848 5118 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797859 5118 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797889 5118 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797903 5118 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-host-slash\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797915 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797927 5118 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.797939 5118 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.800691 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-kube-api-access-2ftnr" (OuterVolumeSpecName: "kube-api-access-2ftnr") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "kube-api-access-2ftnr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.800716 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.808207 5118 scope.go:117] "RemoveContainer" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.808711 5118 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c49b3d_dce8_4a73_895a_32a521a06b22.slice/crio-01292c3808477622b154cb0dbf945d6f4ebae1e270172a575c142ec764f01ed1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c49b3d_dce8_4a73_895a_32a521a06b22.slice\": RecentStats: unable to find data in memory cache]" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.812580 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cf2ef9c8-a4c9-4273-9236-43ea37f1e277" (UID: "cf2ef9c8-a4c9-4273-9236-43ea37f1e277"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.823016 5118 scope.go:117] "RemoveContainer" containerID="94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.837808 5118 scope.go:117] "RemoveContainer" containerID="6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.849979 5118 scope.go:117] "RemoveContainer" containerID="33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.864578 5118 scope.go:117] "RemoveContainer" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.865586 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": container with ID starting with 9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67 not found: ID does not exist" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.865643 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} err="failed to get container status \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": rpc error: code = NotFound desc = could not find container \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": container with ID starting with 9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.865681 5118 scope.go:117] "RemoveContainer" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.866178 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": container with ID starting with 3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7 not found: 
ID does not exist" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.866229 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} err="failed to get container status \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": rpc error: code = NotFound desc = could not find container \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": container with ID starting with 3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.866254 5118 scope.go:117] "RemoveContainer" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.866699 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": container with ID starting with 2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1 not found: ID does not exist" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.866731 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} err="failed to get container status \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": rpc error: code = NotFound desc = could not find container \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": container with ID starting with 2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.866749 5118 scope.go:117] "RemoveContainer" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.867133 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": container with ID starting with 3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7 not found: ID does not exist" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.867163 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} err="failed to get container status \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": rpc error: code = NotFound desc = could not find container \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": container with ID starting with 3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.867181 5118 scope.go:117] "RemoveContainer" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.867476 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": container with ID starting with 90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786 not found: ID does not exist" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.867514 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} err="failed to get container status \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": rpc error: code = NotFound desc = could not find container \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": container with ID starting with 90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.867530 5118 scope.go:117] "RemoveContainer" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.867809 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": container with ID starting with a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4 not found: ID does not exist" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.867840 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} err="failed to get container status \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": rpc error: code = NotFound desc = could not find container \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": container with ID starting with a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.867858 5118 scope.go:117] "RemoveContainer" containerID="94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.868140 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": container with ID starting with 94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100 not found: ID does not exist" containerID="94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.868181 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} err="failed to get container status \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": rpc error: code = NotFound desc = could not find container \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": container with ID starting with 94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.868197 5118 scope.go:117] "RemoveContainer" containerID="6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" Dec 08 17:53:41 crc 
kubenswrapper[5118]: E1208 17:53:41.868541 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": container with ID starting with 6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665 not found: ID does not exist" containerID="6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.868572 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} err="failed to get container status \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": rpc error: code = NotFound desc = could not find container \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": container with ID starting with 6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.868593 5118 scope.go:117] "RemoveContainer" containerID="33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3" Dec 08 17:53:41 crc kubenswrapper[5118]: E1208 17:53:41.868837 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": container with ID starting with 33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3 not found: ID does not exist" containerID="33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.868897 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} err="failed to get container status \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": rpc error: code = NotFound desc = could not find container \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": container with ID starting with 33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.868930 5118 scope.go:117] "RemoveContainer" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869199 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} err="failed to get container status \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": rpc error: code = NotFound desc = could not find container \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": container with ID starting with 9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869223 5118 scope.go:117] "RemoveContainer" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869500 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} err="failed to get container status 
\"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": rpc error: code = NotFound desc = could not find container \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": container with ID starting with 3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869520 5118 scope.go:117] "RemoveContainer" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869659 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} err="failed to get container status \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": rpc error: code = NotFound desc = could not find container \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": container with ID starting with 2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869718 5118 scope.go:117] "RemoveContainer" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869951 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} err="failed to get container status \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": rpc error: code = NotFound desc = could not find container \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": container with ID starting with 3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.869981 5118 scope.go:117] "RemoveContainer" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.870227 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} err="failed to get container status \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": rpc error: code = NotFound desc = could not find container \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": container with ID starting with 90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.870267 5118 scope.go:117] "RemoveContainer" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.870506 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} err="failed to get container status \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": rpc error: code = NotFound desc = could not find container \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": container with ID starting with a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.870540 5118 scope.go:117] "RemoveContainer" 
containerID="94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.870852 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} err="failed to get container status \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": rpc error: code = NotFound desc = could not find container \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": container with ID starting with 94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.870889 5118 scope.go:117] "RemoveContainer" containerID="6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871104 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} err="failed to get container status \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": rpc error: code = NotFound desc = could not find container \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": container with ID starting with 6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871134 5118 scope.go:117] "RemoveContainer" containerID="33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871355 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} err="failed to get container status \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": rpc error: code = NotFound desc = could not find container \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": container with ID starting with 33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871375 5118 scope.go:117] "RemoveContainer" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871605 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} err="failed to get container status \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": rpc error: code = NotFound desc = could not find container \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": container with ID starting with 9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871635 5118 scope.go:117] "RemoveContainer" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871844 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} err="failed to get container status \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": rpc error: code = NotFound desc = could not find 
container \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": container with ID starting with 3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.871865 5118 scope.go:117] "RemoveContainer" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872142 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} err="failed to get container status \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": rpc error: code = NotFound desc = could not find container \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": container with ID starting with 2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872162 5118 scope.go:117] "RemoveContainer" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872364 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} err="failed to get container status \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": rpc error: code = NotFound desc = could not find container \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": container with ID starting with 3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872395 5118 scope.go:117] "RemoveContainer" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872634 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} err="failed to get container status \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": rpc error: code = NotFound desc = could not find container \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": container with ID starting with 90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872662 5118 scope.go:117] "RemoveContainer" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872919 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} err="failed to get container status \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": rpc error: code = NotFound desc = could not find container \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": container with ID starting with a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.872944 5118 scope.go:117] "RemoveContainer" containerID="94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.873157 5118 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} err="failed to get container status \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": rpc error: code = NotFound desc = could not find container \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": container with ID starting with 94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.873184 5118 scope.go:117] "RemoveContainer" containerID="6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.873388 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} err="failed to get container status \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": rpc error: code = NotFound desc = could not find container \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": container with ID starting with 6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.873413 5118 scope.go:117] "RemoveContainer" containerID="33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.873655 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} err="failed to get container status \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": rpc error: code = NotFound desc = could not find container \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": container with ID starting with 33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.873683 5118 scope.go:117] "RemoveContainer" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.874067 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} err="failed to get container status \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": rpc error: code = NotFound desc = could not find container \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": container with ID starting with 9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.874099 5118 scope.go:117] "RemoveContainer" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.874379 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} err="failed to get container status \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": rpc error: code = NotFound desc = could not find container \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": container with ID starting with 
3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.874403 5118 scope.go:117] "RemoveContainer" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.875147 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} err="failed to get container status \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": rpc error: code = NotFound desc = could not find container \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": container with ID starting with 2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.875172 5118 scope.go:117] "RemoveContainer" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.877011 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} err="failed to get container status \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": rpc error: code = NotFound desc = could not find container \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": container with ID starting with 3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.877028 5118 scope.go:117] "RemoveContainer" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.885434 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} err="failed to get container status \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": rpc error: code = NotFound desc = could not find container \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": container with ID starting with 90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.885455 5118 scope.go:117] "RemoveContainer" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.885897 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} err="failed to get container status \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": rpc error: code = NotFound desc = could not find container \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": container with ID starting with a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.885980 5118 scope.go:117] "RemoveContainer" containerID="94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.886817 5118 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100"} err="failed to get container status \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": rpc error: code = NotFound desc = could not find container \"94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100\": container with ID starting with 94a3e70f9f409de1b5a37032e8595edbddd745578a1ca5c5880941e50c94d100 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.886839 5118 scope.go:117] "RemoveContainer" containerID="6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.887080 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665"} err="failed to get container status \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": rpc error: code = NotFound desc = could not find container \"6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665\": container with ID starting with 6c02426ea159bbe26cc8542b53a59e18f979d900d431361620bba4814036f665 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.887109 5118 scope.go:117] "RemoveContainer" containerID="33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.887386 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3"} err="failed to get container status \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": rpc error: code = NotFound desc = could not find container \"33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3\": container with ID starting with 33d781beb8d8052347450f2300267e6ffbb6a918f9201e9b447140356c184fa3 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.887417 5118 scope.go:117] "RemoveContainer" containerID="9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.888820 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67"} err="failed to get container status \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": rpc error: code = NotFound desc = could not find container \"9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67\": container with ID starting with 9ad93d634d87a7a7587fe23f469ba02f057130c392df5c1071a94fb11c762b67 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.888847 5118 scope.go:117] "RemoveContainer" containerID="3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.889209 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7"} err="failed to get container status \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": rpc error: code = NotFound desc = could not find container \"3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7\": container with ID starting with 3273cab25da83878f46c4a13c311a3c4fc6db1eebc9dad9ad1ad5373511dfee7 not found: ID does not exist" Dec 
08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.889238 5118 scope.go:117] "RemoveContainer" containerID="2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.889461 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1"} err="failed to get container status \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": rpc error: code = NotFound desc = could not find container \"2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1\": container with ID starting with 2f02199ab806ead16e41b396d0822ce0742a12d090da6ddeafad84473f0df5b1 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.889480 5118 scope.go:117] "RemoveContainer" containerID="3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.889932 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7"} err="failed to get container status \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": rpc error: code = NotFound desc = could not find container \"3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7\": container with ID starting with 3bff84e1b7624dd37333ecc0931e980c01f0213323f41dbfb457726159b771a7 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.889960 5118 scope.go:117] "RemoveContainer" containerID="90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.890254 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786"} err="failed to get container status \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": rpc error: code = NotFound desc = could not find container \"90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786\": container with ID starting with 90a647427f6adcc0a95e3b5049458fc47738fe641231fc03b40ccc3c95400786 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.890288 5118 scope.go:117] "RemoveContainer" containerID="a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.890790 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4"} err="failed to get container status \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": rpc error: code = NotFound desc = could not find container \"a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4\": container with ID starting with a115df047e5c59266fc13d616ad015150425f0493f706d59f2346747e8a779e4 not found: ID does not exist" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898496 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-etc-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898525 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-var-lib-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898542 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-systemd-units\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898563 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zw8\" (UniqueName: \"kubernetes.io/projected/abafac28-99fe-42a6-bee9-f3fb197b1bc2-kube-api-access-v7zw8\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898587 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898641 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898673 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-systemd-units\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898682 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-var-lib-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898749 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-env-overrides\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898828 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-cni-netd\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 
17:53:41.898865 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovnkube-config\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898937 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-etc-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898951 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-systemd\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.898975 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-cni-netd\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899025 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-log-socket\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899064 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-slash\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899109 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899158 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-run-netns\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899215 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovn-node-metrics-cert\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899240 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovnkube-script-lib\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899305 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-run-ovn-kubernetes\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899379 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-kubelet\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899423 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-env-overrides\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899449 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-node-log\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899463 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-run-netns\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899491 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-log-socket\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899513 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-slash\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899538 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-openvswitch\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899554 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-systemd\") pod \"ovnkube-node-gpg4k\" (UID: 
\"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899560 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-run-ovn-kubernetes\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899618 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-ovn\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899639 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-cni-bin\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899676 5118 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899686 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899696 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ftnr\" (UniqueName: \"kubernetes.io/projected/cf2ef9c8-a4c9-4273-9236-43ea37f1e277-kube-api-access-2ftnr\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899716 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-cni-bin\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899738 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-run-ovn\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899777 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovnkube-config\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.899833 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-node-log\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc 
kubenswrapper[5118]: I1208 17:53:41.899868 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/abafac28-99fe-42a6-bee9-f3fb197b1bc2-host-kubelet\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.900187 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovnkube-script-lib\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.907228 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/abafac28-99fe-42a6-bee9-f3fb197b1bc2-ovn-node-metrics-cert\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:41 crc kubenswrapper[5118]: I1208 17:53:41.914568 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zw8\" (UniqueName: \"kubernetes.io/projected/abafac28-99fe-42a6-bee9-f3fb197b1bc2-kube-api-access-v7zw8\") pod \"ovnkube-node-gpg4k\" (UID: \"abafac28-99fe-42a6-bee9-f3fb197b1bc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.014340 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr4x4"] Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.018955 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr4x4"] Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.062864 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:42 crc kubenswrapper[5118]: W1208 17:53:42.083328 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabafac28_99fe_42a6_bee9_f3fb197b1bc2.slice/crio-15bd691f4b9444f9d0c44dd721272e1c1cd04d0736bb19c88d92ebdcfcbf1511 WatchSource:0}: Error finding container 15bd691f4b9444f9d0c44dd721272e1c1cd04d0736bb19c88d92ebdcfcbf1511: Status 404 returned error can't find the container with id 15bd691f4b9444f9d0c44dd721272e1c1cd04d0736bb19c88d92ebdcfcbf1511 Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.713350 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dlvbf_a091751f-234c-43ee-8324-ebb98bb3ec36/kube-multus/0.log" Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.713537 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dlvbf" event={"ID":"a091751f-234c-43ee-8324-ebb98bb3ec36","Type":"ContainerStarted","Data":"5391a68838037a225be477e32151007942d1bd8daf15956a0060339d8cf75a28"} Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.716074 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" event={"ID":"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f","Type":"ContainerStarted","Data":"e63c2cba44bfcdac0de47aa706c43352077941330917efddf560ea4443c9af2e"} Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.716512 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" event={"ID":"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f","Type":"ContainerStarted","Data":"96079cb701770d9fae109305100d0072507a63c5fca2442af83a6413475f5be7"} Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.716720 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" event={"ID":"8105d3ef-5e53-4418-9d0c-12f9b6ffa67f","Type":"ContainerStarted","Data":"413d37103d93ea50c7e80e700c332ed8a94efd9a7a074134c96cb5e94fe9b987"} Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.719193 5118 generic.go:358] "Generic (PLEG): container finished" podID="abafac28-99fe-42a6-bee9-f3fb197b1bc2" containerID="6d84d443a58281d2767b9dacdf072b18618dbdef8e36b289cc591e931bd2a163" exitCode=0 Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.719242 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerDied","Data":"6d84d443a58281d2767b9dacdf072b18618dbdef8e36b289cc591e931bd2a163"} Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.719268 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"15bd691f4b9444f9d0c44dd721272e1c1cd04d0736bb19c88d92ebdcfcbf1511"} Dec 08 17:53:42 crc kubenswrapper[5118]: I1208 17:53:42.781455 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lfp2m" podStartSLOduration=1.781429779 podStartE2EDuration="1.781429779s" podCreationTimestamp="2025-12-08 17:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:53:42.775683393 +0000 UTC m=+679.677007557" 
watchObservedRunningTime="2025-12-08 17:53:42.781429779 +0000 UTC m=+679.682753913" Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.443265 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c49b3d-dce8-4a73-895a-32a521a06b22" path="/var/lib/kubelet/pods/94c49b3d-dce8-4a73-895a-32a521a06b22/volumes" Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.444632 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2ef9c8-a4c9-4273-9236-43ea37f1e277" path="/var/lib/kubelet/pods/cf2ef9c8-a4c9-4273-9236-43ea37f1e277/volumes" Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.737860 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"e92114ed87f72e434a195909e4a58ced503abf762db1947ea7ba6695b9d503d7"} Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.737932 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"deeb802bf7c772f8b862a54f158a332bc3d460409faa4727a1df883ebd121882"} Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.737946 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"11988fa395e5e74a8f3f8f479de0c085b2d7d239caa1935f1f0892750e650dd7"} Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.737960 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"9579abb04674800ba0ccb10ff7b5db696c77508c14491cef70fcab743f5c9bae"} Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.737972 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"ac3153f4397f68d600726682969fd3051e0067790b08d42efa7f805694c1dfee"} Dec 08 17:53:43 crc kubenswrapper[5118]: I1208 17:53:43.737982 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"4f353c9f9aaed4af963b46fe1bb214a5fdb0bc4ac9cfd0d376d2bcffa167ad15"} Dec 08 17:53:46 crc kubenswrapper[5118]: I1208 17:53:46.759923 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"3b6e3043c4fc20a112ef04d9ae4128d681c6c44c597f4abf95ca0f01b75af10c"} Dec 08 17:53:49 crc kubenswrapper[5118]: I1208 17:53:49.783472 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" event={"ID":"abafac28-99fe-42a6-bee9-f3fb197b1bc2","Type":"ContainerStarted","Data":"0d5eaa48d009bad8d2a3c97e4c95015a7d411e448b333ac2660a66bb876add20"} Dec 08 17:53:49 crc kubenswrapper[5118]: I1208 17:53:49.783981 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:49 crc kubenswrapper[5118]: I1208 17:53:49.784016 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:49 crc 
kubenswrapper[5118]: I1208 17:53:49.784031 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:49 crc kubenswrapper[5118]: I1208 17:53:49.813740 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:49 crc kubenswrapper[5118]: I1208 17:53:49.814657 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:53:49 crc kubenswrapper[5118]: I1208 17:53:49.838047 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" podStartSLOduration=8.838025598 podStartE2EDuration="8.838025598s" podCreationTimestamp="2025-12-08 17:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:53:49.834970934 +0000 UTC m=+686.736295048" watchObservedRunningTime="2025-12-08 17:53:49.838025598 +0000 UTC m=+686.739349692" Dec 08 17:54:02 crc kubenswrapper[5118]: I1208 17:54:02.112079 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:54:02 crc kubenswrapper[5118]: I1208 17:54:02.112798 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:54:21 crc kubenswrapper[5118]: I1208 17:54:21.830323 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpg4k" Dec 08 17:54:23 crc kubenswrapper[5118]: I1208 17:54:23.911660 5118 scope.go:117] "RemoveContainer" containerID="7f99e1c04090370d3c6660cfd043ab828963f2dd41095e28df8b18683736ff50" Dec 08 17:54:23 crc kubenswrapper[5118]: I1208 17:54:23.938388 5118 scope.go:117] "RemoveContainer" containerID="a5876b9a1fb17784bd002aa72e8228c02623f890ab1a65b0b7bf1f47a9b4dad1" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.431691 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tkpnz"] Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.447306 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkpnz"] Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.447525 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.487365 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-catalog-content\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.487660 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lppnh\" (UniqueName: \"kubernetes.io/projected/2d7eb264-8a42-4885-a9db-a693bf3911c8-kube-api-access-lppnh\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.487737 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-utilities\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.589473 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lppnh\" (UniqueName: \"kubernetes.io/projected/2d7eb264-8a42-4885-a9db-a693bf3911c8-kube-api-access-lppnh\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.589515 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-utilities\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.589585 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-catalog-content\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.590101 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-catalog-content\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.590200 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-utilities\") pod \"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.616236 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lppnh\" (UniqueName: \"kubernetes.io/projected/2d7eb264-8a42-4885-a9db-a693bf3911c8-kube-api-access-lppnh\") pod 
\"certified-operators-tkpnz\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.785827 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:28 crc kubenswrapper[5118]: I1208 17:54:28.973249 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkpnz"] Dec 08 17:54:29 crc kubenswrapper[5118]: I1208 17:54:29.035456 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkpnz" event={"ID":"2d7eb264-8a42-4885-a9db-a693bf3911c8","Type":"ContainerStarted","Data":"4b4187aa05d60ca91fd5cf349b353949f50cc44c5221895dfe286963d741fc5f"} Dec 08 17:54:30 crc kubenswrapper[5118]: I1208 17:54:30.044860 5118 generic.go:358] "Generic (PLEG): container finished" podID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerID="fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237" exitCode=0 Dec 08 17:54:30 crc kubenswrapper[5118]: I1208 17:54:30.044965 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkpnz" event={"ID":"2d7eb264-8a42-4885-a9db-a693bf3911c8","Type":"ContainerDied","Data":"fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237"} Dec 08 17:54:31 crc kubenswrapper[5118]: I1208 17:54:31.052085 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkpnz" event={"ID":"2d7eb264-8a42-4885-a9db-a693bf3911c8","Type":"ContainerStarted","Data":"9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a"} Dec 08 17:54:31 crc kubenswrapper[5118]: I1208 17:54:31.962948 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:54:31 crc kubenswrapper[5118]: I1208 17:54:31.963065 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:54:31 crc kubenswrapper[5118]: I1208 17:54:31.963123 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:54:31 crc kubenswrapper[5118]: I1208 17:54:31.963972 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e127f5f6ea947945bd90450d12f167e6419e8af6b0458b462fdc7e8064751458"} pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:54:31 crc kubenswrapper[5118]: I1208 17:54:31.964051 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" containerID="cri-o://e127f5f6ea947945bd90450d12f167e6419e8af6b0458b462fdc7e8064751458" gracePeriod=600 Dec 08 17:54:32 crc kubenswrapper[5118]: I1208 
17:54:32.061233 5118 generic.go:358] "Generic (PLEG): container finished" podID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerID="9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a" exitCode=0 Dec 08 17:54:32 crc kubenswrapper[5118]: I1208 17:54:32.061365 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkpnz" event={"ID":"2d7eb264-8a42-4885-a9db-a693bf3911c8","Type":"ContainerDied","Data":"9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a"} Dec 08 17:54:33 crc kubenswrapper[5118]: I1208 17:54:33.072574 5118 generic.go:358] "Generic (PLEG): container finished" podID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerID="e127f5f6ea947945bd90450d12f167e6419e8af6b0458b462fdc7e8064751458" exitCode=0 Dec 08 17:54:33 crc kubenswrapper[5118]: I1208 17:54:33.072621 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerDied","Data":"e127f5f6ea947945bd90450d12f167e6419e8af6b0458b462fdc7e8064751458"} Dec 08 17:54:33 crc kubenswrapper[5118]: I1208 17:54:33.073050 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"ba4ed75eee971f8ac62b8cc0f3802e18dd9cabd36e3862daae0b5ce56bd2f691"} Dec 08 17:54:33 crc kubenswrapper[5118]: I1208 17:54:33.073075 5118 scope.go:117] "RemoveContainer" containerID="b0ca934293bb401de268428d32fee96419e1934766145fbcb973b04a905f6519" Dec 08 17:54:33 crc kubenswrapper[5118]: I1208 17:54:33.077042 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkpnz" event={"ID":"2d7eb264-8a42-4885-a9db-a693bf3911c8","Type":"ContainerStarted","Data":"9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a"} Dec 08 17:54:33 crc kubenswrapper[5118]: I1208 17:54:33.119199 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tkpnz" podStartSLOduration=4.39727987 podStartE2EDuration="5.119178938s" podCreationTimestamp="2025-12-08 17:54:28 +0000 UTC" firstStartedPulling="2025-12-08 17:54:30.04583196 +0000 UTC m=+726.947156064" lastFinishedPulling="2025-12-08 17:54:30.767731038 +0000 UTC m=+727.669055132" observedRunningTime="2025-12-08 17:54:33.117440762 +0000 UTC m=+730.018764856" watchObservedRunningTime="2025-12-08 17:54:33.119178938 +0000 UTC m=+730.020503042" Dec 08 17:54:38 crc kubenswrapper[5118]: I1208 17:54:38.786610 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:38 crc kubenswrapper[5118]: I1208 17:54:38.788209 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:38 crc kubenswrapper[5118]: I1208 17:54:38.853621 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:39 crc kubenswrapper[5118]: I1208 17:54:39.179246 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:39 crc kubenswrapper[5118]: I1208 17:54:39.233616 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tkpnz"] Dec 08 17:54:41 crc 
kubenswrapper[5118]: I1208 17:54:41.135050 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tkpnz" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="registry-server" containerID="cri-o://9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a" gracePeriod=2 Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.508062 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hl4hq"] Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.527556 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.660239 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hl4hq"] Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.660392 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.660716 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-catalog-content\") pod \"2d7eb264-8a42-4885-a9db-a693bf3911c8\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.660836 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lppnh\" (UniqueName: \"kubernetes.io/projected/2d7eb264-8a42-4885-a9db-a693bf3911c8-kube-api-access-lppnh\") pod \"2d7eb264-8a42-4885-a9db-a693bf3911c8\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.661004 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-utilities\") pod \"2d7eb264-8a42-4885-a9db-a693bf3911c8\" (UID: \"2d7eb264-8a42-4885-a9db-a693bf3911c8\") " Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.662126 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-utilities" (OuterVolumeSpecName: "utilities") pod "2d7eb264-8a42-4885-a9db-a693bf3911c8" (UID: "2d7eb264-8a42-4885-a9db-a693bf3911c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.676313 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7eb264-8a42-4885-a9db-a693bf3911c8-kube-api-access-lppnh" (OuterVolumeSpecName: "kube-api-access-lppnh") pod "2d7eb264-8a42-4885-a9db-a693bf3911c8" (UID: "2d7eb264-8a42-4885-a9db-a693bf3911c8"). InnerVolumeSpecName "kube-api-access-lppnh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.693603 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d7eb264-8a42-4885-a9db-a693bf3911c8" (UID: "2d7eb264-8a42-4885-a9db-a693bf3911c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.761826 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lztxs\" (UniqueName: \"kubernetes.io/projected/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-kube-api-access-lztxs\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.761935 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-utilities\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.761990 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-catalog-content\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.762235 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.762276 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7eb264-8a42-4885-a9db-a693bf3911c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.762287 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lppnh\" (UniqueName: \"kubernetes.io/projected/2d7eb264-8a42-4885-a9db-a693bf3911c8-kube-api-access-lppnh\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.863462 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lztxs\" (UniqueName: \"kubernetes.io/projected/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-kube-api-access-lztxs\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.863536 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-utilities\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.863571 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-catalog-content\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.863999 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-catalog-content\") pod \"redhat-operators-hl4hq\" 
(UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.864074 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-utilities\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:41 crc kubenswrapper[5118]: I1208 17:54:41.879451 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lztxs\" (UniqueName: \"kubernetes.io/projected/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-kube-api-access-lztxs\") pod \"redhat-operators-hl4hq\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.002959 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.150643 5118 generic.go:358] "Generic (PLEG): container finished" podID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerID="9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a" exitCode=0 Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.150682 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkpnz" event={"ID":"2d7eb264-8a42-4885-a9db-a693bf3911c8","Type":"ContainerDied","Data":"9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a"} Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.150942 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkpnz" event={"ID":"2d7eb264-8a42-4885-a9db-a693bf3911c8","Type":"ContainerDied","Data":"4b4187aa05d60ca91fd5cf349b353949f50cc44c5221895dfe286963d741fc5f"} Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.150769 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tkpnz" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.151009 5118 scope.go:117] "RemoveContainer" containerID="9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.173091 5118 scope.go:117] "RemoveContainer" containerID="9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.188416 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tkpnz"] Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.191814 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tkpnz"] Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.203707 5118 scope.go:117] "RemoveContainer" containerID="fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.217726 5118 scope.go:117] "RemoveContainer" containerID="9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a" Dec 08 17:54:42 crc kubenswrapper[5118]: E1208 17:54:42.218248 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a\": container with ID starting with 9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a not found: ID does not exist" containerID="9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.218278 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a"} err="failed to get container status \"9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a\": rpc error: code = NotFound desc = could not find container \"9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a\": container with ID starting with 9e33e67771c8440713224b46481c649e24d43ffefd2cbe3373d6854054cd0e3a not found: ID does not exist" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.218297 5118 scope.go:117] "RemoveContainer" containerID="9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a" Dec 08 17:54:42 crc kubenswrapper[5118]: E1208 17:54:42.218520 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a\": container with ID starting with 9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a not found: ID does not exist" containerID="9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.218537 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a"} err="failed to get container status \"9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a\": rpc error: code = NotFound desc = could not find container \"9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a\": container with ID starting with 9ece5de64e48be57571942705a113e230c963fc6022f3522a1398b79f88bae9a not found: ID does not exist" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.218549 5118 scope.go:117] "RemoveContainer" 
containerID="fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237" Dec 08 17:54:42 crc kubenswrapper[5118]: E1208 17:54:42.218731 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237\": container with ID starting with fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237 not found: ID does not exist" containerID="fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.218747 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237"} err="failed to get container status \"fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237\": rpc error: code = NotFound desc = could not find container \"fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237\": container with ID starting with fc16a3bd951e31b37a13f89e253101249ef7ee541bb8a6ea8977542ba9763237 not found: ID does not exist" Dec 08 17:54:42 crc kubenswrapper[5118]: I1208 17:54:42.407788 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hl4hq"] Dec 08 17:54:43 crc kubenswrapper[5118]: I1208 17:54:43.162696 5118 generic.go:358] "Generic (PLEG): container finished" podID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerID="9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475" exitCode=0 Dec 08 17:54:43 crc kubenswrapper[5118]: I1208 17:54:43.162822 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hl4hq" event={"ID":"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b","Type":"ContainerDied","Data":"9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475"} Dec 08 17:54:43 crc kubenswrapper[5118]: I1208 17:54:43.162859 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hl4hq" event={"ID":"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b","Type":"ContainerStarted","Data":"24210ec6bd60a71aa153a7bd8e0b815977fd1af47e969217fdbff4efc408db1d"} Dec 08 17:54:43 crc kubenswrapper[5118]: I1208 17:54:43.437260 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" path="/var/lib/kubelet/pods/2d7eb264-8a42-4885-a9db-a693bf3911c8/volumes" Dec 08 17:54:44 crc kubenswrapper[5118]: I1208 17:54:44.170861 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hl4hq" event={"ID":"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b","Type":"ContainerStarted","Data":"3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b"} Dec 08 17:54:45 crc kubenswrapper[5118]: I1208 17:54:45.181658 5118 generic.go:358] "Generic (PLEG): container finished" podID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerID="3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b" exitCode=0 Dec 08 17:54:45 crc kubenswrapper[5118]: I1208 17:54:45.181742 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hl4hq" event={"ID":"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b","Type":"ContainerDied","Data":"3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b"} Dec 08 17:54:46 crc kubenswrapper[5118]: I1208 17:54:46.187755 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hl4hq" 
event={"ID":"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b","Type":"ContainerStarted","Data":"bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6"} Dec 08 17:54:46 crc kubenswrapper[5118]: I1208 17:54:46.204487 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hl4hq" podStartSLOduration=4.691750406 podStartE2EDuration="5.204466444s" podCreationTimestamp="2025-12-08 17:54:41 +0000 UTC" firstStartedPulling="2025-12-08 17:54:43.164339061 +0000 UTC m=+740.065663195" lastFinishedPulling="2025-12-08 17:54:43.677055099 +0000 UTC m=+740.578379233" observedRunningTime="2025-12-08 17:54:46.202151622 +0000 UTC m=+743.103475716" watchObservedRunningTime="2025-12-08 17:54:46.204466444 +0000 UTC m=+743.105790548" Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.040076 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xp5vr"] Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.041861 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xp5vr" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="registry-server" containerID="cri-o://0a7c833b3c15ad97f73ff137d4555fe46e5a82ea584c876d53847d21edb4686c" gracePeriod=30 Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.227681 5118 generic.go:358] "Generic (PLEG): container finished" podID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerID="0a7c833b3c15ad97f73ff137d4555fe46e5a82ea584c876d53847d21edb4686c" exitCode=0 Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.227768 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp5vr" event={"ID":"c9416e49-5134-45de-9eeb-a15be7fdbf63","Type":"ContainerDied","Data":"0a7c833b3c15ad97f73ff137d4555fe46e5a82ea584c876d53847d21edb4686c"} Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.362614 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.383148 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-utilities\") pod \"c9416e49-5134-45de-9eeb-a15be7fdbf63\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.383388 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-catalog-content\") pod \"c9416e49-5134-45de-9eeb-a15be7fdbf63\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.384687 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn9pg\" (UniqueName: \"kubernetes.io/projected/c9416e49-5134-45de-9eeb-a15be7fdbf63-kube-api-access-jn9pg\") pod \"c9416e49-5134-45de-9eeb-a15be7fdbf63\" (UID: \"c9416e49-5134-45de-9eeb-a15be7fdbf63\") " Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.384978 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-utilities" (OuterVolumeSpecName: "utilities") pod "c9416e49-5134-45de-9eeb-a15be7fdbf63" (UID: "c9416e49-5134-45de-9eeb-a15be7fdbf63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.385195 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.394402 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9416e49-5134-45de-9eeb-a15be7fdbf63" (UID: "c9416e49-5134-45de-9eeb-a15be7fdbf63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.395029 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9416e49-5134-45de-9eeb-a15be7fdbf63-kube-api-access-jn9pg" (OuterVolumeSpecName: "kube-api-access-jn9pg") pod "c9416e49-5134-45de-9eeb-a15be7fdbf63" (UID: "c9416e49-5134-45de-9eeb-a15be7fdbf63"). InnerVolumeSpecName "kube-api-access-jn9pg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.486039 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9416e49-5134-45de-9eeb-a15be7fdbf63-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:50 crc kubenswrapper[5118]: I1208 17:54:50.486075 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jn9pg\" (UniqueName: \"kubernetes.io/projected/c9416e49-5134-45de-9eeb-a15be7fdbf63-kube-api-access-jn9pg\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192164 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cmjbz"] Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192702 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="extract-content" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192716 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="extract-content" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192727 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="extract-content" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192733 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="extract-content" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192745 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="extract-utilities" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192751 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="extract-utilities" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192766 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="extract-utilities" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192772 5118 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="extract-utilities" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192787 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="registry-server" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192792 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="registry-server" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192800 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="registry-server" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192805 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="registry-server" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192902 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" containerName="registry-server" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.192919 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d7eb264-8a42-4885-a9db-a693bf3911c8" containerName="registry-server" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.199566 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.203069 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cmjbz"] Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.244592 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp5vr" event={"ID":"c9416e49-5134-45de-9eeb-a15be7fdbf63","Type":"ContainerDied","Data":"c11f84302bfe3264cf3e55e89a65907964bdd273130b6ff7fe1c6969648837c5"} Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.244681 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xp5vr" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.244688 5118 scope.go:117] "RemoveContainer" containerID="0a7c833b3c15ad97f73ff137d4555fe46e5a82ea584c876d53847d21edb4686c" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.269230 5118 scope.go:117] "RemoveContainer" containerID="b8e08b1bcf5296444229869ccbb1d5e6b8236afceeff928c15285308a30d17bb" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.284637 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xp5vr"] Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.288432 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xp5vr"] Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296312 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82c8be84-d9b0-44df-99be-57f994255a0b-registry-certificates\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296351 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296388 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296485 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-registry-tls\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296598 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82c8be84-d9b0-44df-99be-57f994255a0b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296637 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82c8be84-d9b0-44df-99be-57f994255a0b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296701 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82c8be84-d9b0-44df-99be-57f994255a0b-trusted-ca\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.296857 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5z4t\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-kube-api-access-l5z4t\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.306462 5118 scope.go:117] "RemoveContainer" containerID="15abf83ac167b026161991b2c26d6b4469f123748db4a90d01e99f1268ea71ff" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.323991 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.397994 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l5z4t\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-kube-api-access-l5z4t\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.398067 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82c8be84-d9b0-44df-99be-57f994255a0b-registry-certificates\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.398090 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.398146 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-registry-tls\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.398175 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82c8be84-d9b0-44df-99be-57f994255a0b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.398195 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82c8be84-d9b0-44df-99be-57f994255a0b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.398228 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82c8be84-d9b0-44df-99be-57f994255a0b-trusted-ca\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.399622 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82c8be84-d9b0-44df-99be-57f994255a0b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.399811 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82c8be84-d9b0-44df-99be-57f994255a0b-trusted-ca\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.400405 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82c8be84-d9b0-44df-99be-57f994255a0b-registry-certificates\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.403528 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82c8be84-d9b0-44df-99be-57f994255a0b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.405275 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-registry-tls\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.417648 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5z4t\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-kube-api-access-l5z4t\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.419625 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82c8be84-d9b0-44df-99be-57f994255a0b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cmjbz\" (UID: \"82c8be84-d9b0-44df-99be-57f994255a0b\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.436611 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9416e49-5134-45de-9eeb-a15be7fdbf63" path="/var/lib/kubelet/pods/c9416e49-5134-45de-9eeb-a15be7fdbf63/volumes" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.513233 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:51 crc kubenswrapper[5118]: I1208 17:54:51.705455 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cmjbz"] Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.004027 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.004153 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.043520 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.252932 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" event={"ID":"82c8be84-d9b0-44df-99be-57f994255a0b","Type":"ContainerStarted","Data":"235452b872161231c674f8d4c5ef793b3ec64fc977e9f9f96d2b4a4654304907"} Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.252993 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" event={"ID":"82c8be84-d9b0-44df-99be-57f994255a0b","Type":"ContainerStarted","Data":"42b4b9839c709fcee00bdae06ff7f011cff52246bc72cbd262f9589b280f7e98"} Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.253060 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.278846 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" podStartSLOduration=1.278824613 podStartE2EDuration="1.278824613s" podCreationTimestamp="2025-12-08 17:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:54:52.275719719 +0000 UTC m=+749.177043843" watchObservedRunningTime="2025-12-08 17:54:52.278824613 +0000 UTC m=+749.180148717" Dec 08 17:54:52 crc kubenswrapper[5118]: I1208 17:54:52.295829 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:54 crc kubenswrapper[5118]: I1208 17:54:54.681556 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hl4hq"] Dec 08 17:54:54 crc kubenswrapper[5118]: I1208 17:54:54.683227 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hl4hq" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="registry-server" containerID="cri-o://bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6" gracePeriod=2 Dec 08 17:54:56 crc kubenswrapper[5118]: I1208 17:54:56.139743 5118 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5"] Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.526914 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5"] Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.527286 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.531861 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.587907 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.588042 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.588087 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t6tg\" (UniqueName: \"kubernetes.io/projected/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-kube-api-access-9t6tg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.690700 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.690070 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.690842 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9t6tg\" (UniqueName: \"kubernetes.io/projected/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-kube-api-access-9t6tg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.691358 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.691860 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.715229 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t6tg\" (UniqueName: \"kubernetes.io/projected/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-kube-api-access-9t6tg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:57 crc kubenswrapper[5118]: I1208 17:54:57.858270 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:54:58 crc kubenswrapper[5118]: I1208 17:54:58.066573 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5"] Dec 08 17:54:58 crc kubenswrapper[5118]: W1208 17:54:58.074603 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d941e2a_672c_4bb7_b8fc_314ecbcf7781.slice/crio-2a7c27be34f0ea80d0d8deca84039524de988c06829e6374708a625c6a090285 WatchSource:0}: Error finding container 2a7c27be34f0ea80d0d8deca84039524de988c06829e6374708a625c6a090285: Status 404 returned error can't find the container with id 2a7c27be34f0ea80d0d8deca84039524de988c06829e6374708a625c6a090285 Dec 08 17:54:58 crc kubenswrapper[5118]: I1208 17:54:58.290302 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" event={"ID":"8d941e2a-672c-4bb7-b8fc-314ecbcf7781","Type":"ContainerStarted","Data":"2a7c27be34f0ea80d0d8deca84039524de988c06829e6374708a625c6a090285"} Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.260171 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.308259 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" event={"ID":"8d941e2a-672c-4bb7-b8fc-314ecbcf7781","Type":"ContainerStarted","Data":"d79b2aa47d7a270fcb98edee8cefb939c8b23c89c125dfcd6df3944030747446"} Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.314667 5118 generic.go:358] "Generic (PLEG): container finished" podID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerID="bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6" exitCode=0 Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.314818 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hl4hq" event={"ID":"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b","Type":"ContainerDied","Data":"bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6"} Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.314855 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hl4hq" event={"ID":"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b","Type":"ContainerDied","Data":"24210ec6bd60a71aa153a7bd8e0b815977fd1af47e969217fdbff4efc408db1d"} Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.314897 5118 scope.go:117] "RemoveContainer" containerID="bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.315044 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hl4hq" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.323486 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lztxs\" (UniqueName: \"kubernetes.io/projected/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-kube-api-access-lztxs\") pod \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.323527 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-catalog-content\") pod \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.323553 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-utilities\") pod \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\" (UID: \"3cb8fcb4-9838-4dd2-93a0-5bb860fd915b\") " Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.327334 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-utilities" (OuterVolumeSpecName: "utilities") pod "3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" (UID: "3cb8fcb4-9838-4dd2-93a0-5bb860fd915b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.334453 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-kube-api-access-lztxs" (OuterVolumeSpecName: "kube-api-access-lztxs") pod "3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" (UID: "3cb8fcb4-9838-4dd2-93a0-5bb860fd915b"). InnerVolumeSpecName "kube-api-access-lztxs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.388843 5118 scope.go:117] "RemoveContainer" containerID="3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.405188 5118 scope.go:117] "RemoveContainer" containerID="9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.415795 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" (UID: "3cb8fcb4-9838-4dd2-93a0-5bb860fd915b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.425248 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.425294 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lztxs\" (UniqueName: \"kubernetes.io/projected/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-kube-api-access-lztxs\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.425307 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.435903 5118 scope.go:117] "RemoveContainer" containerID="bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6" Dec 08 17:54:59 crc kubenswrapper[5118]: E1208 17:54:59.436318 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6\": container with ID starting with bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6 not found: ID does not exist" containerID="bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.436555 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6"} err="failed to get container status \"bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6\": rpc error: code = NotFound desc = could not find container \"bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6\": container with ID starting with bcbbc9c55f76fb8eaa4c26b198a288e7c0128c4e3866fdaa039ade4d756dd1d6 not found: ID does not exist" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.436646 5118 scope.go:117] "RemoveContainer" containerID="3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b" Dec 08 
17:54:59 crc kubenswrapper[5118]: E1208 17:54:59.437215 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b\": container with ID starting with 3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b not found: ID does not exist" containerID="3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.437264 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b"} err="failed to get container status \"3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b\": rpc error: code = NotFound desc = could not find container \"3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b\": container with ID starting with 3b06259c1dc4b7c5d0014655e176028d295d2ef1f8fb2b7d5951a7406ecdc49b not found: ID does not exist" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.437298 5118 scope.go:117] "RemoveContainer" containerID="9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475" Dec 08 17:54:59 crc kubenswrapper[5118]: E1208 17:54:59.437623 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475\": container with ID starting with 9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475 not found: ID does not exist" containerID="9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.437673 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475"} err="failed to get container status \"9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475\": rpc error: code = NotFound desc = could not find container \"9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475\": container with ID starting with 9628e09e3160b0e6d8dc51a9683f96edd9bbadb5229724b8477aef2ec1287475 not found: ID does not exist" Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.643809 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hl4hq"] Dec 08 17:54:59 crc kubenswrapper[5118]: I1208 17:54:59.648198 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hl4hq"] Dec 08 17:55:00 crc kubenswrapper[5118]: I1208 17:55:00.323277 5118 generic.go:358] "Generic (PLEG): container finished" podID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerID="d79b2aa47d7a270fcb98edee8cefb939c8b23c89c125dfcd6df3944030747446" exitCode=0 Dec 08 17:55:00 crc kubenswrapper[5118]: I1208 17:55:00.323358 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" event={"ID":"8d941e2a-672c-4bb7-b8fc-314ecbcf7781","Type":"ContainerDied","Data":"d79b2aa47d7a270fcb98edee8cefb939c8b23c89c125dfcd6df3944030747446"} Dec 08 17:55:01 crc kubenswrapper[5118]: I1208 17:55:01.439866 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" path="/var/lib/kubelet/pods/3cb8fcb4-9838-4dd2-93a0-5bb860fd915b/volumes" Dec 08 17:55:02 crc kubenswrapper[5118]: I1208 
17:55:02.343258 5118 generic.go:358] "Generic (PLEG): container finished" podID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerID="849afd6bfddd75998bf715224509b794e49b15bbd8aa5d73822b750050ff5147" exitCode=0 Dec 08 17:55:02 crc kubenswrapper[5118]: I1208 17:55:02.343315 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" event={"ID":"8d941e2a-672c-4bb7-b8fc-314ecbcf7781","Type":"ContainerDied","Data":"849afd6bfddd75998bf715224509b794e49b15bbd8aa5d73822b750050ff5147"} Dec 08 17:55:03 crc kubenswrapper[5118]: I1208 17:55:03.350983 5118 generic.go:358] "Generic (PLEG): container finished" podID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerID="3bd5c8e0e4bc0384156dc9a7af15a2c877ae302116c6d5de14def048e620e0ab" exitCode=0 Dec 08 17:55:03 crc kubenswrapper[5118]: I1208 17:55:03.351043 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" event={"ID":"8d941e2a-672c-4bb7-b8fc-314ecbcf7781","Type":"ContainerDied","Data":"3bd5c8e0e4bc0384156dc9a7af15a2c877ae302116c6d5de14def048e620e0ab"} Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.144519 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f"] Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.145762 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="extract-content" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.145807 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="extract-content" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.145907 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="registry-server" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.145935 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="registry-server" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.145984 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="extract-utilities" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.146009 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="extract-utilities" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.146229 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3cb8fcb4-9838-4dd2-93a0-5bb860fd915b" containerName="registry-server" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.167611 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f"] Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.167824 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.195298 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.195378 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsj5w\" (UniqueName: \"kubernetes.io/projected/4d041d5b-762b-4616-bc8a-d21727bd0547-kube-api-access-rsj5w\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.195458 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.296633 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsj5w\" (UniqueName: \"kubernetes.io/projected/4d041d5b-762b-4616-bc8a-d21727bd0547-kube-api-access-rsj5w\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.296724 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.296798 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.297351 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.297635 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.316977 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsj5w\" (UniqueName: \"kubernetes.io/projected/4d041d5b-762b-4616-bc8a-d21727bd0547-kube-api-access-rsj5w\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.488646 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.543554 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj"] Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.644930 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj"] Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.645351 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.650769 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.704040 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t6tg\" (UniqueName: \"kubernetes.io/projected/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-kube-api-access-9t6tg\") pod \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.704095 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-util\") pod \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.704125 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-bundle\") pod \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\" (UID: \"8d941e2a-672c-4bb7-b8fc-314ecbcf7781\") " Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.704313 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmmg\" (UniqueName: \"kubernetes.io/projected/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-kube-api-access-snmmg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.704343 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.704379 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.707389 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-bundle" (OuterVolumeSpecName: "bundle") pod "8d941e2a-672c-4bb7-b8fc-314ecbcf7781" (UID: "8d941e2a-672c-4bb7-b8fc-314ecbcf7781"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.708983 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-kube-api-access-9t6tg" (OuterVolumeSpecName: "kube-api-access-9t6tg") pod "8d941e2a-672c-4bb7-b8fc-314ecbcf7781" (UID: "8d941e2a-672c-4bb7-b8fc-314ecbcf7781"). InnerVolumeSpecName "kube-api-access-9t6tg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.714285 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-util" (OuterVolumeSpecName: "util") pod "8d941e2a-672c-4bb7-b8fc-314ecbcf7781" (UID: "8d941e2a-672c-4bb7-b8fc-314ecbcf7781"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.806145 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-snmmg\" (UniqueName: \"kubernetes.io/projected/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-kube-api-access-snmmg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.806219 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.806296 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.806709 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.806961 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.807206 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9t6tg\" (UniqueName: \"kubernetes.io/projected/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-kube-api-access-9t6tg\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.807237 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.807255 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d941e2a-672c-4bb7-b8fc-314ecbcf7781-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.826662 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmmg\" (UniqueName: \"kubernetes.io/projected/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-kube-api-access-snmmg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.963244 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:04 crc kubenswrapper[5118]: I1208 17:55:04.972429 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f"] Dec 08 17:55:04 crc kubenswrapper[5118]: W1208 17:55:04.987230 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d041d5b_762b_4616_bc8a_d21727bd0547.slice/crio-778e9a31359bff55f065276d20f080996ec0132459d6dad1578f82a69aa467d9 WatchSource:0}: Error finding container 778e9a31359bff55f065276d20f080996ec0132459d6dad1578f82a69aa467d9: Status 404 returned error can't find the container with id 778e9a31359bff55f065276d20f080996ec0132459d6dad1578f82a69aa467d9 Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.194336 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj"] Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.365124 5118 generic.go:358] "Generic (PLEG): container finished" podID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerID="0827effe65da04518ff5acb503b106709427d127c20c78dff317535215b8b7b9" exitCode=0 Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.365311 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" event={"ID":"4d041d5b-762b-4616-bc8a-d21727bd0547","Type":"ContainerDied","Data":"0827effe65da04518ff5acb503b106709427d127c20c78dff317535215b8b7b9"} Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.365641 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" event={"ID":"4d041d5b-762b-4616-bc8a-d21727bd0547","Type":"ContainerStarted","Data":"778e9a31359bff55f065276d20f080996ec0132459d6dad1578f82a69aa467d9"} Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.370290 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" event={"ID":"8d941e2a-672c-4bb7-b8fc-314ecbcf7781","Type":"ContainerDied","Data":"2a7c27be34f0ea80d0d8deca84039524de988c06829e6374708a625c6a090285"} Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.370323 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7c27be34f0ea80d0d8deca84039524de988c06829e6374708a625c6a090285" Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.371027 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hkpv5" Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.379854 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" event={"ID":"0b5d1008-e7ed-481b-85c2-5f359d8eda2d","Type":"ContainerStarted","Data":"12d2058c3a177fc3f78a12f5b7a9d7e024b1782e4557e2706d904b7f28ab7945"} Dec 08 17:55:05 crc kubenswrapper[5118]: I1208 17:55:05.379917 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" event={"ID":"0b5d1008-e7ed-481b-85c2-5f359d8eda2d","Type":"ContainerStarted","Data":"2935ada013e6ac1600bb19077c21a67804e238011fc0ce83d6bf8ec21c5000ce"} Dec 08 17:55:06 crc kubenswrapper[5118]: I1208 17:55:06.390732 5118 generic.go:358] "Generic (PLEG): container finished" podID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerID="12d2058c3a177fc3f78a12f5b7a9d7e024b1782e4557e2706d904b7f28ab7945" exitCode=0 Dec 08 17:55:06 crc kubenswrapper[5118]: I1208 17:55:06.390804 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" event={"ID":"0b5d1008-e7ed-481b-85c2-5f359d8eda2d","Type":"ContainerDied","Data":"12d2058c3a177fc3f78a12f5b7a9d7e024b1782e4557e2706d904b7f28ab7945"} Dec 08 17:55:06 crc kubenswrapper[5118]: I1208 17:55:06.394399 5118 generic.go:358] "Generic (PLEG): container finished" podID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerID="bc0ee9f98152de1ffacf438e50f67c038913ecb9fc5c7ffefd08e0e2c6dc450c" exitCode=0 Dec 08 17:55:06 crc kubenswrapper[5118]: I1208 17:55:06.394631 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" event={"ID":"4d041d5b-762b-4616-bc8a-d21727bd0547","Type":"ContainerDied","Data":"bc0ee9f98152de1ffacf438e50f67c038913ecb9fc5c7ffefd08e0e2c6dc450c"} Dec 08 17:55:07 crc kubenswrapper[5118]: I1208 17:55:07.405626 5118 generic.go:358] "Generic (PLEG): container finished" podID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerID="2feec566b948d612f2e0e363d8bacb1a4286b82de96183776a39f2a825969094" exitCode=0 Dec 08 17:55:07 crc kubenswrapper[5118]: I1208 17:55:07.405680 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" event={"ID":"4d041d5b-762b-4616-bc8a-d21727bd0547","Type":"ContainerDied","Data":"2feec566b948d612f2e0e363d8bacb1a4286b82de96183776a39f2a825969094"} Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.696411 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.771091 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-bundle\") pod \"4d041d5b-762b-4616-bc8a-d21727bd0547\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.771191 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsj5w\" (UniqueName: \"kubernetes.io/projected/4d041d5b-762b-4616-bc8a-d21727bd0547-kube-api-access-rsj5w\") pod \"4d041d5b-762b-4616-bc8a-d21727bd0547\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.771257 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-util\") pod \"4d041d5b-762b-4616-bc8a-d21727bd0547\" (UID: \"4d041d5b-762b-4616-bc8a-d21727bd0547\") " Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.772011 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-bundle" (OuterVolumeSpecName: "bundle") pod "4d041d5b-762b-4616-bc8a-d21727bd0547" (UID: "4d041d5b-762b-4616-bc8a-d21727bd0547"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.796069 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d041d5b-762b-4616-bc8a-d21727bd0547-kube-api-access-rsj5w" (OuterVolumeSpecName: "kube-api-access-rsj5w") pod "4d041d5b-762b-4616-bc8a-d21727bd0547" (UID: "4d041d5b-762b-4616-bc8a-d21727bd0547"). InnerVolumeSpecName "kube-api-access-rsj5w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.801448 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-util" (OuterVolumeSpecName: "util") pod "4d041d5b-762b-4616-bc8a-d21727bd0547" (UID: "4d041d5b-762b-4616-bc8a-d21727bd0547"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.873301 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.873341 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsj5w\" (UniqueName: \"kubernetes.io/projected/4d041d5b-762b-4616-bc8a-d21727bd0547-kube-api-access-rsj5w\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:08 crc kubenswrapper[5118]: I1208 17:55:08.873353 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d041d5b-762b-4616-bc8a-d21727bd0547-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:09 crc kubenswrapper[5118]: I1208 17:55:09.421552 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" Dec 08 17:55:09 crc kubenswrapper[5118]: I1208 17:55:09.421568 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etdr5f" event={"ID":"4d041d5b-762b-4616-bc8a-d21727bd0547","Type":"ContainerDied","Data":"778e9a31359bff55f065276d20f080996ec0132459d6dad1578f82a69aa467d9"} Dec 08 17:55:09 crc kubenswrapper[5118]: I1208 17:55:09.421636 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778e9a31359bff55f065276d20f080996ec0132459d6dad1578f82a69aa467d9" Dec 08 17:55:12 crc kubenswrapper[5118]: I1208 17:55:12.438691 5118 generic.go:358] "Generic (PLEG): container finished" podID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerID="13487af0cd8b05f91e2aa887bcaa0b8b5667962a2918701bb5a838741ecd89ec" exitCode=0 Dec 08 17:55:12 crc kubenswrapper[5118]: I1208 17:55:12.438788 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" event={"ID":"0b5d1008-e7ed-481b-85c2-5f359d8eda2d","Type":"ContainerDied","Data":"13487af0cd8b05f91e2aa887bcaa0b8b5667962a2918701bb5a838741ecd89ec"} Dec 08 17:55:13 crc kubenswrapper[5118]: I1208 17:55:13.266275 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cmjbz" Dec 08 17:55:13 crc kubenswrapper[5118]: I1208 17:55:13.369353 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s6hn4"] Dec 08 17:55:13 crc kubenswrapper[5118]: I1208 17:55:13.471175 5118 generic.go:358] "Generic (PLEG): container finished" podID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerID="5894db6f4f1a330ea6ee6a9a25acdb0eaa563b983b6a995f98ec9365cb2f3d73" exitCode=0 Dec 08 17:55:13 crc kubenswrapper[5118]: I1208 17:55:13.471489 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" event={"ID":"0b5d1008-e7ed-481b-85c2-5f359d8eda2d","Type":"ContainerDied","Data":"5894db6f4f1a330ea6ee6a9a25acdb0eaa563b983b6a995f98ec9365cb2f3d73"} Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.742457 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.850685 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmmg\" (UniqueName: \"kubernetes.io/projected/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-kube-api-access-snmmg\") pod \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.850855 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-util\") pod \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.850889 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-bundle\") pod \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\" (UID: \"0b5d1008-e7ed-481b-85c2-5f359d8eda2d\") " Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.851780 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-bundle" (OuterVolumeSpecName: "bundle") pod "0b5d1008-e7ed-481b-85c2-5f359d8eda2d" (UID: "0b5d1008-e7ed-481b-85c2-5f359d8eda2d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.856973 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-kube-api-access-snmmg" (OuterVolumeSpecName: "kube-api-access-snmmg") pod "0b5d1008-e7ed-481b-85c2-5f359d8eda2d" (UID: "0b5d1008-e7ed-481b-85c2-5f359d8eda2d"). InnerVolumeSpecName "kube-api-access-snmmg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.865503 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-util" (OuterVolumeSpecName: "util") pod "0b5d1008-e7ed-481b-85c2-5f359d8eda2d" (UID: "0b5d1008-e7ed-481b-85c2-5f359d8eda2d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.952373 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.952407 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:14 crc kubenswrapper[5118]: I1208 17:55:14.952419 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-snmmg\" (UniqueName: \"kubernetes.io/projected/0b5d1008-e7ed-481b-85c2-5f359d8eda2d-kube-api-access-snmmg\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.483639 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" event={"ID":"0b5d1008-e7ed-481b-85c2-5f359d8eda2d","Type":"ContainerDied","Data":"2935ada013e6ac1600bb19077c21a67804e238011fc0ce83d6bf8ec21c5000ce"} Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.483946 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2935ada013e6ac1600bb19077c21a67804e238011fc0ce83d6bf8ec21c5000ce" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.483691 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4jmgj" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.616248 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-4j9kn"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617002 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerName="util" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617025 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerName="util" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617043 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerName="util" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617050 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerName="util" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617058 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerName="pull" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617066 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerName="pull" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617077 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617083 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617100 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617106 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617113 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerName="pull" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617119 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerName="pull" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617135 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerName="util" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617142 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerName="util" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617150 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617158 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617169 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerName="pull" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617176 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerName="pull" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617298 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b5d1008-e7ed-481b-85c2-5f359d8eda2d" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617315 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d941e2a-672c-4bb7-b8fc-314ecbcf7781" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.617330 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="4d041d5b-762b-4616-bc8a-d21727bd0547" containerName="extract" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.625427 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.626673 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-4j9kn"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.629659 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.629752 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.630043 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-4qph4\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.736341 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.759084 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.759123 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.759289 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.762339 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlt9r\" (UniqueName: \"kubernetes.io/projected/abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd-kube-api-access-rlt9r\") pod \"obo-prometheus-operator-86648f486b-4j9kn\" (UID: \"abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.762617 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-wpwsd\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.762617 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.763071 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.769154 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.863788 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rlt9r\" (UniqueName: \"kubernetes.io/projected/abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd-kube-api-access-rlt9r\") pod \"obo-prometheus-operator-86648f486b-4j9kn\" (UID: \"abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.863911 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0b7331f-5f3a-41e7-84d0-64a9aa478c60-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm\" (UID: \"b0b7331f-5f3a-41e7-84d0-64a9aa478c60\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.863934 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174b7c35-bd90-4386-a01d-b20d986df7e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t\" (UID: \"174b7c35-bd90-4386-a01d-b20d986df7e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.863956 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174b7c35-bd90-4386-a01d-b20d986df7e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t\" (UID: \"174b7c35-bd90-4386-a01d-b20d986df7e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.864012 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0b7331f-5f3a-41e7-84d0-64a9aa478c60-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm\" (UID: \"b0b7331f-5f3a-41e7-84d0-64a9aa478c60\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.880923 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlt9r\" (UniqueName: \"kubernetes.io/projected/abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd-kube-api-access-rlt9r\") pod \"obo-prometheus-operator-86648f486b-4j9kn\" (UID: \"abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.926374 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mg4b2"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.929817 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.931933 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.937617 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mg4b2"] Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.942229 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.984937 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174b7c35-bd90-4386-a01d-b20d986df7e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t\" (UID: \"174b7c35-bd90-4386-a01d-b20d986df7e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.985028 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0b7331f-5f3a-41e7-84d0-64a9aa478c60-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm\" (UID: \"b0b7331f-5f3a-41e7-84d0-64a9aa478c60\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.985104 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0b7331f-5f3a-41e7-84d0-64a9aa478c60-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm\" (UID: \"b0b7331f-5f3a-41e7-84d0-64a9aa478c60\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.985132 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174b7c35-bd90-4386-a01d-b20d986df7e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t\" (UID: \"174b7c35-bd90-4386-a01d-b20d986df7e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.986035 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-lq686\"" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.990379 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0b7331f-5f3a-41e7-84d0-64a9aa478c60-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm\" (UID: \"b0b7331f-5f3a-41e7-84d0-64a9aa478c60\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.991932 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0b7331f-5f3a-41e7-84d0-64a9aa478c60-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm\" (UID: \"b0b7331f-5f3a-41e7-84d0-64a9aa478c60\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:15 crc kubenswrapper[5118]: I1208 17:55:15.996441 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174b7c35-bd90-4386-a01d-b20d986df7e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t\" (UID: \"174b7c35-bd90-4386-a01d-b20d986df7e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.011033 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174b7c35-bd90-4386-a01d-b20d986df7e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t\" (UID: \"174b7c35-bd90-4386-a01d-b20d986df7e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.081805 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.094682 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-794x7\" (UniqueName: \"kubernetes.io/projected/a7981d87-d276-41a7-ad7c-d6f0cde8fa7d-kube-api-access-794x7\") pod \"observability-operator-78c97476f4-mg4b2\" (UID: \"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d\") " pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.094815 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7981d87-d276-41a7-ad7c-d6f0cde8fa7d-observability-operator-tls\") pod \"observability-operator-78c97476f4-mg4b2\" (UID: \"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d\") " pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.095117 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.192631 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-m2cdr"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.198623 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7981d87-d276-41a7-ad7c-d6f0cde8fa7d-observability-operator-tls\") pod \"observability-operator-78c97476f4-mg4b2\" (UID: \"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d\") " pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.198723 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-794x7\" (UniqueName: \"kubernetes.io/projected/a7981d87-d276-41a7-ad7c-d6f0cde8fa7d-kube-api-access-794x7\") pod \"observability-operator-78c97476f4-mg4b2\" (UID: \"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d\") " pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.205673 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7981d87-d276-41a7-ad7c-d6f0cde8fa7d-observability-operator-tls\") pod \"observability-operator-78c97476f4-mg4b2\" (UID: \"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d\") " pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.214153 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.219813 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-8bg9q\"" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.230608 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-m2cdr"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.233134 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-794x7\" (UniqueName: \"kubernetes.io/projected/a7981d87-d276-41a7-ad7c-d6f0cde8fa7d-kube-api-access-794x7\") pod \"observability-operator-78c97476f4-mg4b2\" (UID: \"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d\") " pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.299858 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/eae302b5-bcca-41b8-9f24-34be44dd7f83-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-m2cdr\" (UID: \"eae302b5-bcca-41b8-9f24-34be44dd7f83\") " pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.300016 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5t96\" (UniqueName: \"kubernetes.io/projected/eae302b5-bcca-41b8-9f24-34be44dd7f83-kube-api-access-z5t96\") pod \"perses-operator-68bdb49cbf-m2cdr\" (UID: \"eae302b5-bcca-41b8-9f24-34be44dd7f83\") " pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.321798 5118 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-4j9kn"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.342261 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.401105 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5t96\" (UniqueName: \"kubernetes.io/projected/eae302b5-bcca-41b8-9f24-34be44dd7f83-kube-api-access-z5t96\") pod \"perses-operator-68bdb49cbf-m2cdr\" (UID: \"eae302b5-bcca-41b8-9f24-34be44dd7f83\") " pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.401415 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/eae302b5-bcca-41b8-9f24-34be44dd7f83-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-m2cdr\" (UID: \"eae302b5-bcca-41b8-9f24-34be44dd7f83\") " pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.402453 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/eae302b5-bcca-41b8-9f24-34be44dd7f83-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-m2cdr\" (UID: \"eae302b5-bcca-41b8-9f24-34be44dd7f83\") " pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.424654 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5t96\" (UniqueName: \"kubernetes.io/projected/eae302b5-bcca-41b8-9f24-34be44dd7f83-kube-api-access-z5t96\") pod \"perses-operator-68bdb49cbf-m2cdr\" (UID: \"eae302b5-bcca-41b8-9f24-34be44dd7f83\") " pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.492127 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" event={"ID":"abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd","Type":"ContainerStarted","Data":"889a8c7004b017ad49fc63d83dc803d26fd9d9ca3514a1d964fbb16a472effb3"} Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.557364 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.681497 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.776608 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-c9c86658-4qchz"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.796485 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.796720 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.805472 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.806025 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-c9c86658-4qchz"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.806318 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.806564 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.807029 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-2vdv5\"" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.857330 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mg4b2"] Dec 08 17:55:16 crc kubenswrapper[5118]: W1208 17:55:16.863905 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7981d87_d276_41a7_ad7c_d6f0cde8fa7d.slice/crio-b4449071ef2dfa477169a324e12c5b54db8a8d53b4056e856142b43c86c47931 WatchSource:0}: Error finding container b4449071ef2dfa477169a324e12c5b54db8a8d53b4056e856142b43c86c47931: Status 404 returned error can't find the container with id b4449071ef2dfa477169a324e12c5b54db8a8d53b4056e856142b43c86c47931 Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.878140 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-m2cdr"] Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.913537 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1899106f-2682-474e-ad41-4dd00dbc7d4b-apiservice-cert\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.913628 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1899106f-2682-474e-ad41-4dd00dbc7d4b-webhook-cert\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:16 crc kubenswrapper[5118]: I1208 17:55:16.913698 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swd2b\" (UniqueName: \"kubernetes.io/projected/1899106f-2682-474e-ad41-4dd00dbc7d4b-kube-api-access-swd2b\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.014717 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1899106f-2682-474e-ad41-4dd00dbc7d4b-webhook-cert\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " 
pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.014806 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-swd2b\" (UniqueName: \"kubernetes.io/projected/1899106f-2682-474e-ad41-4dd00dbc7d4b-kube-api-access-swd2b\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.014864 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1899106f-2682-474e-ad41-4dd00dbc7d4b-apiservice-cert\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.020942 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1899106f-2682-474e-ad41-4dd00dbc7d4b-apiservice-cert\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.021185 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1899106f-2682-474e-ad41-4dd00dbc7d4b-webhook-cert\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.036421 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-swd2b\" (UniqueName: \"kubernetes.io/projected/1899106f-2682-474e-ad41-4dd00dbc7d4b-kube-api-access-swd2b\") pod \"elastic-operator-c9c86658-4qchz\" (UID: \"1899106f-2682-474e-ad41-4dd00dbc7d4b\") " pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.120226 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-c9c86658-4qchz" Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.344006 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-c9c86658-4qchz"] Dec 08 17:55:17 crc kubenswrapper[5118]: W1208 17:55:17.345764 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1899106f_2682_474e_ad41_4dd00dbc7d4b.slice/crio-1ff35b59333394c98d2f3df1bede17bded5937bf35e3e28941b364dccb236ed3 WatchSource:0}: Error finding container 1ff35b59333394c98d2f3df1bede17bded5937bf35e3e28941b364dccb236ed3: Status 404 returned error can't find the container with id 1ff35b59333394c98d2f3df1bede17bded5937bf35e3e28941b364dccb236ed3 Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.503028 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-mg4b2" event={"ID":"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d","Type":"ContainerStarted","Data":"b4449071ef2dfa477169a324e12c5b54db8a8d53b4056e856142b43c86c47931"} Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.505482 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" event={"ID":"174b7c35-bd90-4386-a01d-b20d986df7e5","Type":"ContainerStarted","Data":"e0c592366b5ef63052d61e7ed67660df6fb54cc953bb9c3514427c2726ae04a9"} Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.506763 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-c9c86658-4qchz" event={"ID":"1899106f-2682-474e-ad41-4dd00dbc7d4b","Type":"ContainerStarted","Data":"1ff35b59333394c98d2f3df1bede17bded5937bf35e3e28941b364dccb236ed3"} Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.507999 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" event={"ID":"eae302b5-bcca-41b8-9f24-34be44dd7f83","Type":"ContainerStarted","Data":"a3008cb88c503a6d26be96167f587bdda69dc07ebc987e9a8d0f4d694e66272e"} Dec 08 17:55:17 crc kubenswrapper[5118]: I1208 17:55:17.508903 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" event={"ID":"b0b7331f-5f3a-41e7-84d0-64a9aa478c60","Type":"ContainerStarted","Data":"f73dfe933a483271a8d1b8d72606f9def0bca48b5bba88daf7f3e1286897c5e5"} Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.798225 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9"] Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.804986 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.805667 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9"] Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.807769 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.810141 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.810334 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-r7685\"" Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.924801 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4356ed35-799c-4e39-a660-872291edf6cc-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qtkx9\" (UID: \"4356ed35-799c-4e39-a660-872291edf6cc\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:29 crc kubenswrapper[5118]: I1208 17:55:29.925012 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hph\" (UniqueName: \"kubernetes.io/projected/4356ed35-799c-4e39-a660-872291edf6cc-kube-api-access-p4hph\") pod \"cert-manager-operator-controller-manager-64c74584c4-qtkx9\" (UID: \"4356ed35-799c-4e39-a660-872291edf6cc\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:30 crc kubenswrapper[5118]: I1208 17:55:30.026062 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p4hph\" (UniqueName: \"kubernetes.io/projected/4356ed35-799c-4e39-a660-872291edf6cc-kube-api-access-p4hph\") pod \"cert-manager-operator-controller-manager-64c74584c4-qtkx9\" (UID: \"4356ed35-799c-4e39-a660-872291edf6cc\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:30 crc kubenswrapper[5118]: I1208 17:55:30.026149 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4356ed35-799c-4e39-a660-872291edf6cc-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qtkx9\" (UID: \"4356ed35-799c-4e39-a660-872291edf6cc\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:30 crc kubenswrapper[5118]: I1208 17:55:30.026807 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4356ed35-799c-4e39-a660-872291edf6cc-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qtkx9\" (UID: \"4356ed35-799c-4e39-a660-872291edf6cc\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:30 crc kubenswrapper[5118]: I1208 17:55:30.051658 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4hph\" (UniqueName: \"kubernetes.io/projected/4356ed35-799c-4e39-a660-872291edf6cc-kube-api-access-p4hph\") pod \"cert-manager-operator-controller-manager-64c74584c4-qtkx9\" 
(UID: \"4356ed35-799c-4e39-a660-872291edf6cc\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:30 crc kubenswrapper[5118]: I1208 17:55:30.124529 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.327447 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9"] Dec 08 17:55:34 crc kubenswrapper[5118]: W1208 17:55:34.334155 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4356ed35_799c_4e39_a660_872291edf6cc.slice/crio-0416812ef58d875c73d2d34ff65e34fefafd98ea0eedd5d0318f579aa326b738 WatchSource:0}: Error finding container 0416812ef58d875c73d2d34ff65e34fefafd98ea0eedd5d0318f579aa326b738: Status 404 returned error can't find the container with id 0416812ef58d875c73d2d34ff65e34fefafd98ea0eedd5d0318f579aa326b738 Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.688277 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" event={"ID":"abff26d8-ffb7-4ac9-b7ac-2eb4e66847fd","Type":"ContainerStarted","Data":"a6e921a5ba104809769c880e83a8812c3ab1a3b2bbde310ecfcbeb3affd5fdde"} Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.690725 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" event={"ID":"eae302b5-bcca-41b8-9f24-34be44dd7f83","Type":"ContainerStarted","Data":"e2707b0ec631237c1f3d9588af2e424c6d0d3ce18b17ddb9269e4060d612e869"} Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.691128 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.692965 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" event={"ID":"4356ed35-799c-4e39-a660-872291edf6cc","Type":"ContainerStarted","Data":"0416812ef58d875c73d2d34ff65e34fefafd98ea0eedd5d0318f579aa326b738"} Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.695825 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" event={"ID":"b0b7331f-5f3a-41e7-84d0-64a9aa478c60","Type":"ContainerStarted","Data":"426df9972d870a50e8502ecade0e46204de2f7f4e3b5f9ad8624f843210941c3"} Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.698317 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-mg4b2" event={"ID":"a7981d87-d276-41a7-ad7c-d6f0cde8fa7d","Type":"ContainerStarted","Data":"f40ef41bb25dc6cd4cf1a5c0b9ecfb88114d31dd9c24aeb36995c45d18b35f5c"} Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.698955 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.705924 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" 
event={"ID":"174b7c35-bd90-4386-a01d-b20d986df7e5","Type":"ContainerStarted","Data":"7ccb2e1b98feb7032144feb4a83c4c0700bb708f82e603bb97a404f0993ecd61"} Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.711404 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-mg4b2" Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.736800 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9dkcm" podStartSLOduration=2.344428665 podStartE2EDuration="19.736782534s" podCreationTimestamp="2025-12-08 17:55:15 +0000 UTC" firstStartedPulling="2025-12-08 17:55:16.701142433 +0000 UTC m=+773.602466527" lastFinishedPulling="2025-12-08 17:55:34.093496302 +0000 UTC m=+790.994820396" observedRunningTime="2025-12-08 17:55:34.732852276 +0000 UTC m=+791.634176360" watchObservedRunningTime="2025-12-08 17:55:34.736782534 +0000 UTC m=+791.638106628" Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.737160 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-4j9kn" podStartSLOduration=1.951755586 podStartE2EDuration="19.737155774s" podCreationTimestamp="2025-12-08 17:55:15 +0000 UTC" firstStartedPulling="2025-12-08 17:55:16.347163836 +0000 UTC m=+773.248487930" lastFinishedPulling="2025-12-08 17:55:34.132564024 +0000 UTC m=+791.033888118" observedRunningTime="2025-12-08 17:55:34.711421328 +0000 UTC m=+791.612745432" watchObservedRunningTime="2025-12-08 17:55:34.737155774 +0000 UTC m=+791.638479868" Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.853105 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b9dc645c4-9pj5t" podStartSLOduration=2.501523825 podStartE2EDuration="19.853086325s" podCreationTimestamp="2025-12-08 17:55:15 +0000 UTC" firstStartedPulling="2025-12-08 17:55:16.814937115 +0000 UTC m=+773.716261209" lastFinishedPulling="2025-12-08 17:55:34.166499615 +0000 UTC m=+791.067823709" observedRunningTime="2025-12-08 17:55:34.842246058 +0000 UTC m=+791.743570152" watchObservedRunningTime="2025-12-08 17:55:34.853086325 +0000 UTC m=+791.754410469" Dec 08 17:55:34 crc kubenswrapper[5118]: I1208 17:55:34.867374 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-mg4b2" podStartSLOduration=2.6358334 podStartE2EDuration="19.867351406s" podCreationTimestamp="2025-12-08 17:55:15 +0000 UTC" firstStartedPulling="2025-12-08 17:55:16.870575612 +0000 UTC m=+773.771899706" lastFinishedPulling="2025-12-08 17:55:34.102093628 +0000 UTC m=+791.003417712" observedRunningTime="2025-12-08 17:55:34.807387161 +0000 UTC m=+791.708711255" watchObservedRunningTime="2025-12-08 17:55:34.867351406 +0000 UTC m=+791.768675500" Dec 08 17:55:37 crc kubenswrapper[5118]: I1208 17:55:37.736251 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" event={"ID":"4356ed35-799c-4e39-a660-872291edf6cc","Type":"ContainerStarted","Data":"27a686e23c8a0e8c49578a78a4cb970ab3abdb0db1ccd179dbac7c2564389c94"} Dec 08 17:55:37 crc kubenswrapper[5118]: I1208 17:55:37.756846 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" podStartSLOduration=4.598366168 
podStartE2EDuration="21.756825733s" podCreationTimestamp="2025-12-08 17:55:16 +0000 UTC" firstStartedPulling="2025-12-08 17:55:16.934256486 +0000 UTC m=+773.835580590" lastFinishedPulling="2025-12-08 17:55:34.092716061 +0000 UTC m=+790.994040155" observedRunningTime="2025-12-08 17:55:34.870075511 +0000 UTC m=+791.771399605" watchObservedRunningTime="2025-12-08 17:55:37.756825733 +0000 UTC m=+794.658149827" Dec 08 17:55:38 crc kubenswrapper[5118]: I1208 17:55:38.442209 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" podUID="1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" containerName="registry" containerID="cri-o://c9dc7606a0b78d2fd8ce9155a8194ba05acaed277dfbf4e936fa94958f67ac28" gracePeriod=30 Dec 08 17:55:38 crc kubenswrapper[5118]: I1208 17:55:38.751198 5118 generic.go:358] "Generic (PLEG): container finished" podID="1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" containerID="c9dc7606a0b78d2fd8ce9155a8194ba05acaed277dfbf4e936fa94958f67ac28" exitCode=0 Dec 08 17:55:38 crc kubenswrapper[5118]: I1208 17:55:38.751372 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" event={"ID":"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc","Type":"ContainerDied","Data":"c9dc7606a0b78d2fd8ce9155a8194ba05acaed277dfbf4e936fa94958f67ac28"} Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.293736 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.340937 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qtkx9" podStartSLOduration=7.110586111 podStartE2EDuration="10.34091574s" podCreationTimestamp="2025-12-08 17:55:29 +0000 UTC" firstStartedPulling="2025-12-08 17:55:34.340908512 +0000 UTC m=+791.242232606" lastFinishedPulling="2025-12-08 17:55:37.571238131 +0000 UTC m=+794.472562235" observedRunningTime="2025-12-08 17:55:37.757380268 +0000 UTC m=+794.658704352" watchObservedRunningTime="2025-12-08 17:55:39.34091574 +0000 UTC m=+796.242239834" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.410980 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-trusted-ca\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411051 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-tls\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411288 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411342 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhskb\" (UniqueName: 
\"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-kube-api-access-dhskb\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411392 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-certificates\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411424 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-installation-pull-secrets\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411466 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-bound-sa-token\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411508 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-ca-trust-extracted\") pod \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\" (UID: \"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc\") " Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.411568 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.412046 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.412641 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.422209 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.424016 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.425960 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-kube-api-access-dhskb" (OuterVolumeSpecName: "kube-api-access-dhskb") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "kube-api-access-dhskb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.425965 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.432729 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.441656 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" (UID: "1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.513591 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.513621 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.513631 5118 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.513639 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhskb\" (UniqueName: \"kubernetes.io/projected/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-kube-api-access-dhskb\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.513649 5118 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.513658 5118 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.757792 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" event={"ID":"1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc","Type":"ContainerDied","Data":"ef2774eb27b084c192ab2fbfe7c52e1babc8bccadb79956c3c83e557c0e28270"} Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.757859 5118 scope.go:117] "RemoveContainer" containerID="c9dc7606a0b78d2fd8ce9155a8194ba05acaed277dfbf4e936fa94958f67ac28" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.758047 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s6hn4" Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.788972 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s6hn4"] Dec 08 17:55:39 crc kubenswrapper[5118]: I1208 17:55:39.795073 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s6hn4"] Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.665367 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b"] Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.665979 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" containerName="registry" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.665997 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" containerName="registry" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.666110 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" containerName="registry" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.676629 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.679082 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b"] Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.679321 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.680643 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.680763 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-p6v7n\"" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.730996 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72f27276-bf08-481d-ad0b-11f8e684d170-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-wdn4b\" (UID: \"72f27276-bf08-481d-ad0b-11f8e684d170\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.731093 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlq58\" (UniqueName: \"kubernetes.io/projected/72f27276-bf08-481d-ad0b-11f8e684d170-kube-api-access-zlq58\") pod \"cert-manager-webhook-7894b5b9b4-wdn4b\" (UID: \"72f27276-bf08-481d-ad0b-11f8e684d170\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.832183 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zlq58\" (UniqueName: \"kubernetes.io/projected/72f27276-bf08-481d-ad0b-11f8e684d170-kube-api-access-zlq58\") pod \"cert-manager-webhook-7894b5b9b4-wdn4b\" (UID: \"72f27276-bf08-481d-ad0b-11f8e684d170\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.832263 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72f27276-bf08-481d-ad0b-11f8e684d170-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-wdn4b\" (UID: \"72f27276-bf08-481d-ad0b-11f8e684d170\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.854156 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlq58\" (UniqueName: \"kubernetes.io/projected/72f27276-bf08-481d-ad0b-11f8e684d170-kube-api-access-zlq58\") pod \"cert-manager-webhook-7894b5b9b4-wdn4b\" (UID: \"72f27276-bf08-481d-ad0b-11f8e684d170\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.855467 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72f27276-bf08-481d-ad0b-11f8e684d170-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-wdn4b\" (UID: \"72f27276-bf08-481d-ad0b-11f8e684d170\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:40 crc kubenswrapper[5118]: I1208 17:55:40.995068 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:41 crc kubenswrapper[5118]: I1208 17:55:41.227045 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b"] Dec 08 17:55:41 crc kubenswrapper[5118]: W1208 17:55:41.236170 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f27276_bf08_481d_ad0b_11f8e684d170.slice/crio-5dbcf59da1fc73d327f27b4e3f855691b78c63c29755116cda8eef8573c359ee WatchSource:0}: Error finding container 5dbcf59da1fc73d327f27b4e3f855691b78c63c29755116cda8eef8573c359ee: Status 404 returned error can't find the container with id 5dbcf59da1fc73d327f27b4e3f855691b78c63c29755116cda8eef8573c359ee Dec 08 17:55:41 crc kubenswrapper[5118]: I1208 17:55:41.433688 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc" path="/var/lib/kubelet/pods/1a6cf2c2-bdc0-4d0c-b1e5-9c640c87cbfc/volumes" Dec 08 17:55:41 crc kubenswrapper[5118]: I1208 17:55:41.775297 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-c9c86658-4qchz" event={"ID":"1899106f-2682-474e-ad41-4dd00dbc7d4b","Type":"ContainerStarted","Data":"d23e5d578858b72e0366caf0026d96ebb7932ce9d7016adb967d25f47ad151dc"} Dec 08 17:55:41 crc kubenswrapper[5118]: I1208 17:55:41.776680 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" event={"ID":"72f27276-bf08-481d-ad0b-11f8e684d170","Type":"ContainerStarted","Data":"5dbcf59da1fc73d327f27b4e3f855691b78c63c29755116cda8eef8573c359ee"} Dec 08 17:55:41 crc kubenswrapper[5118]: I1208 17:55:41.794840 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-c9c86658-4qchz" podStartSLOduration=2.287378355 podStartE2EDuration="25.794822664s" podCreationTimestamp="2025-12-08 17:55:16 +0000 UTC" firstStartedPulling="2025-12-08 17:55:17.348627337 +0000 UTC m=+774.249951431" lastFinishedPulling="2025-12-08 17:55:40.856071646 +0000 UTC m=+797.757395740" observedRunningTime="2025-12-08 17:55:41.792788389 +0000 UTC m=+798.694112493" watchObservedRunningTime="2025-12-08 
17:55:41.794822664 +0000 UTC m=+798.696146758" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.049946 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.054921 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.059844 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.059955 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.060191 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.060247 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.060491 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-t7fjv\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.060927 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.061334 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.062518 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.064421 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.068158 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149415 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149468 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149503 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149525 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149546 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149565 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149591 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149616 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149634 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149650 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149682 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149699 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149713 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/72b61c1d-040f-465f-bea8-e024f5879f98-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149740 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.149767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251068 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251117 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251139 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251167 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: 
\"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251196 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251217 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251242 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251263 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251284 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251303 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251321 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251335 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: 
\"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251365 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251382 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.251399 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/72b61c1d-040f-465f-bea8-e024f5879f98-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.253111 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.253608 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.254174 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.254225 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.255050 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 
17:55:42.256225 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.256412 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.256635 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/72b61c1d-040f-465f-bea8-e024f5879f98-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.258629 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.258828 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.258859 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/72b61c1d-040f-465f-bea8-e024f5879f98-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.259288 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.270571 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.270615 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-elasticsearch-config\") pod 
\"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.271058 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/72b61c1d-040f-465f-bea8-e024f5879f98-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"72b61c1d-040f-465f-bea8-e024f5879f98\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.374988 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:42 crc kubenswrapper[5118]: I1208 17:55:42.908488 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.177671 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q"] Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.184201 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.186044 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-ktkxz\"" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.188853 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q"] Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.270611 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42v5j\" (UniqueName: \"kubernetes.io/projected/57678783-1dc9-4366-a2e6-7f8c6321e40f-kube-api-access-42v5j\") pod \"cert-manager-cainjector-7dbf76d5c8-fdk5q\" (UID: \"57678783-1dc9-4366-a2e6-7f8c6321e40f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.270721 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57678783-1dc9-4366-a2e6-7f8c6321e40f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-fdk5q\" (UID: \"57678783-1dc9-4366-a2e6-7f8c6321e40f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.372288 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57678783-1dc9-4366-a2e6-7f8c6321e40f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-fdk5q\" (UID: \"57678783-1dc9-4366-a2e6-7f8c6321e40f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.373242 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-42v5j\" (UniqueName: \"kubernetes.io/projected/57678783-1dc9-4366-a2e6-7f8c6321e40f-kube-api-access-42v5j\") pod \"cert-manager-cainjector-7dbf76d5c8-fdk5q\" (UID: \"57678783-1dc9-4366-a2e6-7f8c6321e40f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.391210 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-42v5j\" 
(UniqueName: \"kubernetes.io/projected/57678783-1dc9-4366-a2e6-7f8c6321e40f-kube-api-access-42v5j\") pod \"cert-manager-cainjector-7dbf76d5c8-fdk5q\" (UID: \"57678783-1dc9-4366-a2e6-7f8c6321e40f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.391301 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57678783-1dc9-4366-a2e6-7f8c6321e40f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-fdk5q\" (UID: \"57678783-1dc9-4366-a2e6-7f8c6321e40f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.503391 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.827329 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"72b61c1d-040f-465f-bea8-e024f5879f98","Type":"ContainerStarted","Data":"63f14142a29c4e7ca44e46d3a69071a22cbd80cb5b480e16054385c0040ccd60"} Dec 08 17:55:43 crc kubenswrapper[5118]: I1208 17:55:43.948749 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q"] Dec 08 17:55:43 crc kubenswrapper[5118]: W1208 17:55:43.958375 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57678783_1dc9_4366_a2e6_7f8c6321e40f.slice/crio-ca3f80877bb835ca98440ccb2512c749261bb18d46b126546945e35111759e30 WatchSource:0}: Error finding container ca3f80877bb835ca98440ccb2512c749261bb18d46b126546945e35111759e30: Status 404 returned error can't find the container with id ca3f80877bb835ca98440ccb2512c749261bb18d46b126546945e35111759e30 Dec 08 17:55:44 crc kubenswrapper[5118]: I1208 17:55:44.841701 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" event={"ID":"57678783-1dc9-4366-a2e6-7f8c6321e40f","Type":"ContainerStarted","Data":"ca3f80877bb835ca98440ccb2512c749261bb18d46b126546945e35111759e30"} Dec 08 17:55:46 crc kubenswrapper[5118]: I1208 17:55:46.725146 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-m2cdr" Dec 08 17:55:57 crc kubenswrapper[5118]: I1208 17:55:57.917146 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" event={"ID":"72f27276-bf08-481d-ad0b-11f8e684d170","Type":"ContainerStarted","Data":"2218458cd943cea84e02f8cf129e57e4a6c7a7afcfded88bda325166ab6d7003"} Dec 08 17:55:57 crc kubenswrapper[5118]: I1208 17:55:57.917613 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:55:57 crc kubenswrapper[5118]: I1208 17:55:57.918945 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"72b61c1d-040f-465f-bea8-e024f5879f98","Type":"ContainerStarted","Data":"d4b6d296db8cd435cf0480348ffac62552cdd1707b75d603ad65c90c27b92bdb"} Dec 08 17:55:57 crc kubenswrapper[5118]: I1208 17:55:57.921240 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" 
event={"ID":"57678783-1dc9-4366-a2e6-7f8c6321e40f","Type":"ContainerStarted","Data":"553a98fb20cf9de874843c0161f7190ce7a701827399b43c41548fd593d4c950"} Dec 08 17:55:57 crc kubenswrapper[5118]: I1208 17:55:57.935105 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" podStartSLOduration=2.009533526 podStartE2EDuration="17.93508647s" podCreationTimestamp="2025-12-08 17:55:40 +0000 UTC" firstStartedPulling="2025-12-08 17:55:41.238614233 +0000 UTC m=+798.139938317" lastFinishedPulling="2025-12-08 17:55:57.164167127 +0000 UTC m=+814.065491261" observedRunningTime="2025-12-08 17:55:57.93177702 +0000 UTC m=+814.833101114" watchObservedRunningTime="2025-12-08 17:55:57.93508647 +0000 UTC m=+814.836410564" Dec 08 17:55:57 crc kubenswrapper[5118]: I1208 17:55:57.979846 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fdk5q" podStartSLOduration=1.6529455610000001 podStartE2EDuration="14.979824138s" podCreationTimestamp="2025-12-08 17:55:43 +0000 UTC" firstStartedPulling="2025-12-08 17:55:43.963259626 +0000 UTC m=+800.864583720" lastFinishedPulling="2025-12-08 17:55:57.290138203 +0000 UTC m=+814.191462297" observedRunningTime="2025-12-08 17:55:57.978385048 +0000 UTC m=+814.879709142" watchObservedRunningTime="2025-12-08 17:55:57.979824138 +0000 UTC m=+814.881148232" Dec 08 17:55:58 crc kubenswrapper[5118]: I1208 17:55:58.212113 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:58 crc kubenswrapper[5118]: I1208 17:55:58.262178 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:59 crc kubenswrapper[5118]: I1208 17:55:59.934301 5118 generic.go:358] "Generic (PLEG): container finished" podID="72b61c1d-040f-465f-bea8-e024f5879f98" containerID="d4b6d296db8cd435cf0480348ffac62552cdd1707b75d603ad65c90c27b92bdb" exitCode=0 Dec 08 17:55:59 crc kubenswrapper[5118]: I1208 17:55:59.934447 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"72b61c1d-040f-465f-bea8-e024f5879f98","Type":"ContainerDied","Data":"d4b6d296db8cd435cf0480348ffac62552cdd1707b75d603ad65c90c27b92bdb"} Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.029667 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-7q2ss"] Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.038275 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-7q2ss"] Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.038428 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.042794 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-pkxzc\"" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.120769 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62-bound-sa-token\") pod \"cert-manager-858d87f86b-7q2ss\" (UID: \"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62\") " pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.120823 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2j4\" (UniqueName: \"kubernetes.io/projected/dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62-kube-api-access-jd2j4\") pod \"cert-manager-858d87f86b-7q2ss\" (UID: \"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62\") " pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.221810 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62-bound-sa-token\") pod \"cert-manager-858d87f86b-7q2ss\" (UID: \"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62\") " pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.222208 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jd2j4\" (UniqueName: \"kubernetes.io/projected/dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62-kube-api-access-jd2j4\") pod \"cert-manager-858d87f86b-7q2ss\" (UID: \"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62\") " pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.242619 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62-bound-sa-token\") pod \"cert-manager-858d87f86b-7q2ss\" (UID: \"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62\") " pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.243554 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd2j4\" (UniqueName: \"kubernetes.io/projected/dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62-kube-api-access-jd2j4\") pod \"cert-manager-858d87f86b-7q2ss\" (UID: \"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62\") " pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.372968 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-7q2ss" Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.852108 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-7q2ss"] Dec 08 17:56:00 crc kubenswrapper[5118]: W1208 17:56:00.854896 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfea6e7f_7e23_4b85_a7f2_a56ba93e1a62.slice/crio-58c1590086f577ebd3c8d0d925d56fc467809bc70c4d333f467540b7567c889b WatchSource:0}: Error finding container 58c1590086f577ebd3c8d0d925d56fc467809bc70c4d333f467540b7567c889b: Status 404 returned error can't find the container with id 58c1590086f577ebd3c8d0d925d56fc467809bc70c4d333f467540b7567c889b Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.942614 5118 generic.go:358] "Generic (PLEG): container finished" podID="72b61c1d-040f-465f-bea8-e024f5879f98" containerID="00e2823aa99de0262bb709a524b98c6b94dfff12d05a2b9ba23f536a7d2aa62d" exitCode=0 Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.942720 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"72b61c1d-040f-465f-bea8-e024f5879f98","Type":"ContainerDied","Data":"00e2823aa99de0262bb709a524b98c6b94dfff12d05a2b9ba23f536a7d2aa62d"} Dec 08 17:56:00 crc kubenswrapper[5118]: I1208 17:56:00.943977 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-7q2ss" event={"ID":"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62","Type":"ContainerStarted","Data":"58c1590086f577ebd3c8d0d925d56fc467809bc70c4d333f467540b7567c889b"} Dec 08 17:56:01 crc kubenswrapper[5118]: I1208 17:56:01.959565 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"72b61c1d-040f-465f-bea8-e024f5879f98","Type":"ContainerStarted","Data":"3476d8e324616206c8c9516ffdb12ae160415d08919044454d72bb5d30b56124"} Dec 08 17:56:01 crc kubenswrapper[5118]: I1208 17:56:01.959911 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:01 crc kubenswrapper[5118]: I1208 17:56:01.963968 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-7q2ss" event={"ID":"dfea6e7f-7e23-4b85-a7f2-a56ba93e1a62","Type":"ContainerStarted","Data":"d5feaa00bdb06ec66815c6ce837833ae8342315d70022c5a9709aaf1300bdffb"} Dec 08 17:56:02 crc kubenswrapper[5118]: I1208 17:56:02.010815 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=5.27535584 podStartE2EDuration="20.010783007s" podCreationTimestamp="2025-12-08 17:55:42 +0000 UTC" firstStartedPulling="2025-12-08 17:55:42.920601276 +0000 UTC m=+799.821925360" lastFinishedPulling="2025-12-08 17:55:57.656028393 +0000 UTC m=+814.557352527" observedRunningTime="2025-12-08 17:56:02.004925816 +0000 UTC m=+818.906249930" watchObservedRunningTime="2025-12-08 17:56:02.010783007 +0000 UTC m=+818.912107151" Dec 08 17:56:02 crc kubenswrapper[5118]: I1208 17:56:02.026273 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-7q2ss" podStartSLOduration=3.026249221 podStartE2EDuration="3.026249221s" podCreationTimestamp="2025-12-08 17:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-08 17:56:02.020277267 +0000 UTC m=+818.921601361" watchObservedRunningTime="2025-12-08 17:56:02.026249221 +0000 UTC m=+818.927573335" Dec 08 17:56:03 crc kubenswrapper[5118]: I1208 17:56:03.930903 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-wdn4b" Dec 08 17:56:07 crc kubenswrapper[5118]: I1208 17:56:07.490288 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-xmhcm"] Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.391710 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-xmhcm"] Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.391926 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xmhcm" Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.394546 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-bcx4t\"" Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.456561 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbzcn\" (UniqueName: \"kubernetes.io/projected/86a460fd-a75a-45d8-8022-1a3aab4c30fd-kube-api-access-vbzcn\") pod \"infrawatch-operators-xmhcm\" (UID: \"86a460fd-a75a-45d8-8022-1a3aab4c30fd\") " pod="service-telemetry/infrawatch-operators-xmhcm" Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.558260 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbzcn\" (UniqueName: \"kubernetes.io/projected/86a460fd-a75a-45d8-8022-1a3aab4c30fd-kube-api-access-vbzcn\") pod \"infrawatch-operators-xmhcm\" (UID: \"86a460fd-a75a-45d8-8022-1a3aab4c30fd\") " pod="service-telemetry/infrawatch-operators-xmhcm" Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.576699 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbzcn\" (UniqueName: \"kubernetes.io/projected/86a460fd-a75a-45d8-8022-1a3aab4c30fd-kube-api-access-vbzcn\") pod \"infrawatch-operators-xmhcm\" (UID: \"86a460fd-a75a-45d8-8022-1a3aab4c30fd\") " pod="service-telemetry/infrawatch-operators-xmhcm" Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.711808 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-xmhcm" Dec 08 17:56:09 crc kubenswrapper[5118]: I1208 17:56:09.941582 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-xmhcm"] Dec 08 17:56:10 crc kubenswrapper[5118]: I1208 17:56:10.010519 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xmhcm" event={"ID":"86a460fd-a75a-45d8-8022-1a3aab4c30fd","Type":"ContainerStarted","Data":"a97f426e75bd96aa15e7c6c95fd4f97e212f85aa5b5354f651fc378e78e1100c"} Dec 08 17:56:11 crc kubenswrapper[5118]: I1208 17:56:11.882357 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-xmhcm"] Dec 08 17:56:12 crc kubenswrapper[5118]: I1208 17:56:12.686289 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-tv99j"] Dec 08 17:56:12 crc kubenswrapper[5118]: I1208 17:56:12.927035 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tv99j"] Dec 08 17:56:12 crc kubenswrapper[5118]: I1208 17:56:12.927216 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.008809 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfp7\" (UniqueName: \"kubernetes.io/projected/020b4835-c362-478d-b714-bb42757ae9e2-kube-api-access-rkfp7\") pod \"infrawatch-operators-tv99j\" (UID: \"020b4835-c362-478d-b714-bb42757ae9e2\") " pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.030846 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xmhcm" event={"ID":"86a460fd-a75a-45d8-8022-1a3aab4c30fd","Type":"ContainerStarted","Data":"0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c"} Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.031112 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-xmhcm" podUID="86a460fd-a75a-45d8-8022-1a3aab4c30fd" containerName="registry-server" containerID="cri-o://0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c" gracePeriod=2 Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.050844 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-xmhcm" podStartSLOduration=3.985399188 podStartE2EDuration="6.050821653s" podCreationTimestamp="2025-12-08 17:56:07 +0000 UTC" firstStartedPulling="2025-12-08 17:56:09.956302129 +0000 UTC m=+826.857626223" lastFinishedPulling="2025-12-08 17:56:12.021724594 +0000 UTC m=+828.923048688" observedRunningTime="2025-12-08 17:56:13.05002807 +0000 UTC m=+829.951352184" watchObservedRunningTime="2025-12-08 17:56:13.050821653 +0000 UTC m=+829.952145757" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.064439 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="72b61c1d-040f-465f-bea8-e024f5879f98" containerName="elasticsearch" probeResult="failure" output=< Dec 08 17:56:13 crc kubenswrapper[5118]: {"timestamp": "2025-12-08T17:56:13+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 17:56:13 crc kubenswrapper[5118]: > Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.110474 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rkfp7\" (UniqueName: \"kubernetes.io/projected/020b4835-c362-478d-b714-bb42757ae9e2-kube-api-access-rkfp7\") pod \"infrawatch-operators-tv99j\" (UID: \"020b4835-c362-478d-b714-bb42757ae9e2\") " pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.142434 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkfp7\" (UniqueName: \"kubernetes.io/projected/020b4835-c362-478d-b714-bb42757ae9e2-kube-api-access-rkfp7\") pod \"infrawatch-operators-tv99j\" (UID: \"020b4835-c362-478d-b714-bb42757ae9e2\") " pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.250828 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.521350 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xmhcm" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.616648 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbzcn\" (UniqueName: \"kubernetes.io/projected/86a460fd-a75a-45d8-8022-1a3aab4c30fd-kube-api-access-vbzcn\") pod \"86a460fd-a75a-45d8-8022-1a3aab4c30fd\" (UID: \"86a460fd-a75a-45d8-8022-1a3aab4c30fd\") " Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.623522 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a460fd-a75a-45d8-8022-1a3aab4c30fd-kube-api-access-vbzcn" (OuterVolumeSpecName: "kube-api-access-vbzcn") pod "86a460fd-a75a-45d8-8022-1a3aab4c30fd" (UID: "86a460fd-a75a-45d8-8022-1a3aab4c30fd"). InnerVolumeSpecName "kube-api-access-vbzcn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.717838 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbzcn\" (UniqueName: \"kubernetes.io/projected/86a460fd-a75a-45d8-8022-1a3aab4c30fd-kube-api-access-vbzcn\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:13 crc kubenswrapper[5118]: I1208 17:56:13.819724 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tv99j"] Dec 08 17:56:13 crc kubenswrapper[5118]: W1208 17:56:13.821160 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod020b4835_c362_478d_b714_bb42757ae9e2.slice/crio-ac36bb4df55ec56255ddd967453b3fbdc2364a3df344521f0809cd33d0681223 WatchSource:0}: Error finding container ac36bb4df55ec56255ddd967453b3fbdc2364a3df344521f0809cd33d0681223: Status 404 returned error can't find the container with id ac36bb4df55ec56255ddd967453b3fbdc2364a3df344521f0809cd33d0681223 Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.044132 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tv99j" event={"ID":"020b4835-c362-478d-b714-bb42757ae9e2","Type":"ContainerStarted","Data":"ac36bb4df55ec56255ddd967453b3fbdc2364a3df344521f0809cd33d0681223"} Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.045605 5118 generic.go:358] "Generic (PLEG): container finished" podID="86a460fd-a75a-45d8-8022-1a3aab4c30fd" containerID="0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c" exitCode=0 Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.045653 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xmhcm" event={"ID":"86a460fd-a75a-45d8-8022-1a3aab4c30fd","Type":"ContainerDied","Data":"0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c"} Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.045695 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xmhcm" event={"ID":"86a460fd-a75a-45d8-8022-1a3aab4c30fd","Type":"ContainerDied","Data":"a97f426e75bd96aa15e7c6c95fd4f97e212f85aa5b5354f651fc378e78e1100c"} Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.045716 5118 scope.go:117] "RemoveContainer" containerID="0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c" Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.046097 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-xmhcm" Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.063599 5118 scope.go:117] "RemoveContainer" containerID="0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c" Dec 08 17:56:14 crc kubenswrapper[5118]: E1208 17:56:14.064037 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c\": container with ID starting with 0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c not found: ID does not exist" containerID="0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c" Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.064069 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c"} err="failed to get container status \"0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c\": rpc error: code = NotFound desc = could not find container \"0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c\": container with ID starting with 0cae898899b9dba66206a82a8886b32f7d8fc5b2e24fe80e8865ff583577823c not found: ID does not exist" Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.085966 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-xmhcm"] Dec 08 17:56:14 crc kubenswrapper[5118]: I1208 17:56:14.092068 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-xmhcm"] Dec 08 17:56:15 crc kubenswrapper[5118]: I1208 17:56:15.057923 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tv99j" event={"ID":"020b4835-c362-478d-b714-bb42757ae9e2","Type":"ContainerStarted","Data":"7da1c86e59effcad5e9275193a79e0174074938b2d66280a112539c2a4e2f482"} Dec 08 17:56:15 crc kubenswrapper[5118]: I1208 17:56:15.079635 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-tv99j" podStartSLOduration=2.691435401 podStartE2EDuration="3.079605892s" podCreationTimestamp="2025-12-08 17:56:12 +0000 UTC" firstStartedPulling="2025-12-08 17:56:13.82297601 +0000 UTC m=+830.724300134" lastFinishedPulling="2025-12-08 17:56:14.211146531 +0000 UTC m=+831.112470625" observedRunningTime="2025-12-08 17:56:15.075045466 +0000 UTC m=+831.976369600" watchObservedRunningTime="2025-12-08 17:56:15.079605892 +0000 UTC m=+831.980930026" Dec 08 17:56:15 crc kubenswrapper[5118]: I1208 17:56:15.438690 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a460fd-a75a-45d8-8022-1a3aab4c30fd" path="/var/lib/kubelet/pods/86a460fd-a75a-45d8-8022-1a3aab4c30fd/volumes" Dec 08 17:56:18 crc kubenswrapper[5118]: I1208 17:56:18.513032 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:23 crc kubenswrapper[5118]: I1208 17:56:23.251123 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:23 crc kubenswrapper[5118]: I1208 17:56:23.252490 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:23 crc kubenswrapper[5118]: I1208 17:56:23.278442 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="started" pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:24 crc kubenswrapper[5118]: I1208 17:56:24.144246 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-tv99j" Dec 08 17:56:27 crc kubenswrapper[5118]: I1208 17:56:27.951289 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq"] Dec 08 17:56:27 crc kubenswrapper[5118]: I1208 17:56:27.952472 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86a460fd-a75a-45d8-8022-1a3aab4c30fd" containerName="registry-server" Dec 08 17:56:27 crc kubenswrapper[5118]: I1208 17:56:27.952490 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a460fd-a75a-45d8-8022-1a3aab4c30fd" containerName="registry-server" Dec 08 17:56:27 crc kubenswrapper[5118]: I1208 17:56:27.952605 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="86a460fd-a75a-45d8-8022-1a3aab4c30fd" containerName="registry-server" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.479136 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq"] Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.479312 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.524245 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zj26\" (UniqueName: \"kubernetes.io/projected/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-kube-api-access-4zj26\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.524292 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.524367 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.626038 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.626176 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4zj26\" (UniqueName: \"kubernetes.io/projected/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-kube-api-access-4zj26\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.626496 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.626731 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.627048 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.663992 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zj26\" (UniqueName: \"kubernetes.io/projected/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-kube-api-access-4zj26\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.732120 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx"] Dec 08 17:56:28 crc kubenswrapper[5118]: I1208 17:56:28.806694 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:29 crc kubenswrapper[5118]: W1208 17:56:29.226914 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dfcd1bd_ac9d_4eba_b160_b7f4335fb440.slice/crio-3c2d2aabec3ddf467be563ae844cae07e4cb0c8bd4763b2ce84272a0a197550d WatchSource:0}: Error finding container 3c2d2aabec3ddf467be563ae844cae07e4cb0c8bd4763b2ce84272a0a197550d: Status 404 returned error can't find the container with id 3c2d2aabec3ddf467be563ae844cae07e4cb0c8bd4763b2ce84272a0a197550d Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.279592 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx"] Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.279746 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq"] Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.279862 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.337761 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.338547 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.338768 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hts\" (UniqueName: \"kubernetes.io/projected/f97402a7-57a3-4f4a-af9f-478d646d2cbc-kube-api-access-46hts\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.440013 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-46hts\" (UniqueName: \"kubernetes.io/projected/f97402a7-57a3-4f4a-af9f-478d646d2cbc-kube-api-access-46hts\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.440174 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: 
\"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.440203 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.440862 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.440905 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.461284 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-46hts\" (UniqueName: \"kubernetes.io/projected/f97402a7-57a3-4f4a-af9f-478d646d2cbc-kube-api-access-46hts\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.530297 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj"] Dec 08 17:56:29 crc kubenswrapper[5118]: I1208 17:56:29.604432 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:29 crc kubenswrapper[5118]: W1208 17:56:29.802629 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf97402a7_57a3_4f4a_af9f_478d646d2cbc.slice/crio-3182b268e15e5a9dae6bd9d62c0e1c15361887d6f5afe589908eb4becf27d9e8 WatchSource:0}: Error finding container 3182b268e15e5a9dae6bd9d62c0e1c15361887d6f5afe589908eb4becf27d9e8: Status 404 returned error can't find the container with id 3182b268e15e5a9dae6bd9d62c0e1c15361887d6f5afe589908eb4becf27d9e8 Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.153903 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj"] Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.153938 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx"] Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.154068 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.156927 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.166577 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" event={"ID":"f97402a7-57a3-4f4a-af9f-478d646d2cbc","Type":"ContainerStarted","Data":"3182b268e15e5a9dae6bd9d62c0e1c15361887d6f5afe589908eb4becf27d9e8"} Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.168506 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" event={"ID":"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440","Type":"ContainerStarted","Data":"3c2d2aabec3ddf467be563ae844cae07e4cb0c8bd4763b2ce84272a0a197550d"} Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.254289 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxd95\" (UniqueName: \"kubernetes.io/projected/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-kube-api-access-sxd95\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.254449 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.254597 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.355853 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.356023 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.356123 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxd95\" 
(UniqueName: \"kubernetes.io/projected/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-kube-api-access-sxd95\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.357126 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.357797 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.377647 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxd95\" (UniqueName: \"kubernetes.io/projected/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-kube-api-access-sxd95\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.493342 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:30 crc kubenswrapper[5118]: I1208 17:56:30.717040 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj"] Dec 08 17:56:30 crc kubenswrapper[5118]: W1208 17:56:30.717868 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc70d8b4a_afd5_4ece_bd7f_9caf1f100d65.slice/crio-cb663a0d10c4cf8f0556c3d9ba76fa1ca496666a39393c22677549dfa2d54c1f WatchSource:0}: Error finding container cb663a0d10c4cf8f0556c3d9ba76fa1ca496666a39393c22677549dfa2d54c1f: Status 404 returned error can't find the container with id cb663a0d10c4cf8f0556c3d9ba76fa1ca496666a39393c22677549dfa2d54c1f Dec 08 17:56:31 crc kubenswrapper[5118]: I1208 17:56:31.200572 5118 generic.go:358] "Generic (PLEG): container finished" podID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerID="b4f8f2eceaa49eb50c4831feb67ddacd68cb76a38dd21c782141f9b5dde7d0fc" exitCode=0 Dec 08 17:56:31 crc kubenswrapper[5118]: I1208 17:56:31.200692 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" event={"ID":"f97402a7-57a3-4f4a-af9f-478d646d2cbc","Type":"ContainerDied","Data":"b4f8f2eceaa49eb50c4831feb67ddacd68cb76a38dd21c782141f9b5dde7d0fc"} Dec 08 17:56:31 crc kubenswrapper[5118]: I1208 17:56:31.204977 5118 generic.go:358] "Generic (PLEG): container finished" podID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerID="4c35d8974d5888bf524dbe5eefde9e191c58b3803db6ffa3939b4ce04352185f" exitCode=0 Dec 08 17:56:31 crc kubenswrapper[5118]: I1208 17:56:31.205040 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" event={"ID":"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440","Type":"ContainerDied","Data":"4c35d8974d5888bf524dbe5eefde9e191c58b3803db6ffa3939b4ce04352185f"} Dec 08 17:56:31 crc kubenswrapper[5118]: I1208 17:56:31.210662 5118 generic.go:358] "Generic (PLEG): container finished" podID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerID="5474ff2522113e9bd184a5c141c638525df10cc04c07db43a2e9844cd99d73c4" exitCode=0 Dec 08 17:56:31 crc kubenswrapper[5118]: I1208 17:56:31.210721 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" event={"ID":"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65","Type":"ContainerDied","Data":"5474ff2522113e9bd184a5c141c638525df10cc04c07db43a2e9844cd99d73c4"} Dec 08 17:56:31 crc kubenswrapper[5118]: I1208 17:56:31.210758 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" event={"ID":"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65","Type":"ContainerStarted","Data":"cb663a0d10c4cf8f0556c3d9ba76fa1ca496666a39393c22677549dfa2d54c1f"} Dec 08 17:56:33 crc kubenswrapper[5118]: I1208 17:56:33.239450 5118 generic.go:358] "Generic (PLEG): container finished" podID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerID="05a0f73f3535c0d5c2a0ebe8006219a712dfb9e9efa1cdc79f315fd1e3633fee" exitCode=0 Dec 08 17:56:33 crc kubenswrapper[5118]: I1208 17:56:33.239553 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" 
event={"ID":"f97402a7-57a3-4f4a-af9f-478d646d2cbc","Type":"ContainerDied","Data":"05a0f73f3535c0d5c2a0ebe8006219a712dfb9e9efa1cdc79f315fd1e3633fee"} Dec 08 17:56:33 crc kubenswrapper[5118]: I1208 17:56:33.242739 5118 generic.go:358] "Generic (PLEG): container finished" podID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerID="0ca132d5c8c9872037343085b432294d20a42d1355b62c8997516c468986533d" exitCode=0 Dec 08 17:56:33 crc kubenswrapper[5118]: I1208 17:56:33.242804 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" event={"ID":"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440","Type":"ContainerDied","Data":"0ca132d5c8c9872037343085b432294d20a42d1355b62c8997516c468986533d"} Dec 08 17:56:33 crc kubenswrapper[5118]: I1208 17:56:33.253922 5118 generic.go:358] "Generic (PLEG): container finished" podID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerID="7a95459686e794ce2364c97941ce45a6a0b2f58786e7c4d2cc3db359fa6ffac5" exitCode=0 Dec 08 17:56:33 crc kubenswrapper[5118]: I1208 17:56:33.253980 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" event={"ID":"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65","Type":"ContainerDied","Data":"7a95459686e794ce2364c97941ce45a6a0b2f58786e7c4d2cc3db359fa6ffac5"} Dec 08 17:56:34 crc kubenswrapper[5118]: I1208 17:56:34.265866 5118 generic.go:358] "Generic (PLEG): container finished" podID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerID="30a7f684eff5dd9681ad3fe2f569a0870845035b6dc6ddd9b3098bfd72cfa9d4" exitCode=0 Dec 08 17:56:34 crc kubenswrapper[5118]: I1208 17:56:34.265914 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" event={"ID":"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65","Type":"ContainerDied","Data":"30a7f684eff5dd9681ad3fe2f569a0870845035b6dc6ddd9b3098bfd72cfa9d4"} Dec 08 17:56:34 crc kubenswrapper[5118]: I1208 17:56:34.270739 5118 generic.go:358] "Generic (PLEG): container finished" podID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerID="87fea37bc9cf6f903e07e23f2df1da34cc7a8ef0682d180e4755d99a1b948e15" exitCode=0 Dec 08 17:56:34 crc kubenswrapper[5118]: I1208 17:56:34.270809 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" event={"ID":"f97402a7-57a3-4f4a-af9f-478d646d2cbc","Type":"ContainerDied","Data":"87fea37bc9cf6f903e07e23f2df1da34cc7a8ef0682d180e4755d99a1b948e15"} Dec 08 17:56:34 crc kubenswrapper[5118]: I1208 17:56:34.273150 5118 generic.go:358] "Generic (PLEG): container finished" podID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerID="d3e8561202cc89d83ca353a282a58fe1a93cb345811d80c4f8d79bbece0f3150" exitCode=0 Dec 08 17:56:34 crc kubenswrapper[5118]: I1208 17:56:34.273184 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" event={"ID":"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440","Type":"ContainerDied","Data":"d3e8561202cc89d83ca353a282a58fe1a93cb345811d80c4f8d79bbece0f3150"} Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.515358 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.607448 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.613960 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.634184 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-bundle\") pod \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.634337 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxd95\" (UniqueName: \"kubernetes.io/projected/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-kube-api-access-sxd95\") pod \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.634473 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-util\") pod \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\" (UID: \"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.635699 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-bundle" (OuterVolumeSpecName: "bundle") pod "c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" (UID: "c70d8b4a-afd5-4ece-bd7f-9caf1f100d65"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.651705 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-util" (OuterVolumeSpecName: "util") pod "c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" (UID: "c70d8b4a-afd5-4ece-bd7f-9caf1f100d65"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.665124 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-kube-api-access-sxd95" (OuterVolumeSpecName: "kube-api-access-sxd95") pod "c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" (UID: "c70d8b4a-afd5-4ece-bd7f-9caf1f100d65"). InnerVolumeSpecName "kube-api-access-sxd95". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.735570 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-bundle\") pod \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.735660 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-util\") pod \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.735760 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-bundle\") pod \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.735836 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46hts\" (UniqueName: \"kubernetes.io/projected/f97402a7-57a3-4f4a-af9f-478d646d2cbc-kube-api-access-46hts\") pod \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.735912 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zj26\" (UniqueName: \"kubernetes.io/projected/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-kube-api-access-4zj26\") pod \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\" (UID: \"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.735937 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-util\") pod \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\" (UID: \"f97402a7-57a3-4f4a-af9f-478d646d2cbc\") " Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.736129 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.736155 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.736169 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxd95\" (UniqueName: \"kubernetes.io/projected/c70d8b4a-afd5-4ece-bd7f-9caf1f100d65-kube-api-access-sxd95\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.736977 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-bundle" (OuterVolumeSpecName: "bundle") pod "8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" (UID: "8dfcd1bd-ac9d-4eba-b160-b7f4335fb440"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.737011 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-bundle" (OuterVolumeSpecName: "bundle") pod "f97402a7-57a3-4f4a-af9f-478d646d2cbc" (UID: "f97402a7-57a3-4f4a-af9f-478d646d2cbc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.739750 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97402a7-57a3-4f4a-af9f-478d646d2cbc-kube-api-access-46hts" (OuterVolumeSpecName: "kube-api-access-46hts") pod "f97402a7-57a3-4f4a-af9f-478d646d2cbc" (UID: "f97402a7-57a3-4f4a-af9f-478d646d2cbc"). InnerVolumeSpecName "kube-api-access-46hts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.740166 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-kube-api-access-4zj26" (OuterVolumeSpecName: "kube-api-access-4zj26") pod "8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" (UID: "8dfcd1bd-ac9d-4eba-b160-b7f4335fb440"). InnerVolumeSpecName "kube-api-access-4zj26". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.748235 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-util" (OuterVolumeSpecName: "util") pod "f97402a7-57a3-4f4a-af9f-478d646d2cbc" (UID: "f97402a7-57a3-4f4a-af9f-478d646d2cbc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.752826 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-util" (OuterVolumeSpecName: "util") pod "8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" (UID: "8dfcd1bd-ac9d-4eba-b160-b7f4335fb440"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.836991 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.837023 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-46hts\" (UniqueName: \"kubernetes.io/projected/f97402a7-57a3-4f4a-af9f-478d646d2cbc-kube-api-access-46hts\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.837034 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4zj26\" (UniqueName: \"kubernetes.io/projected/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-kube-api-access-4zj26\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.837042 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.837049 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97402a7-57a3-4f4a-af9f-478d646d2cbc-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:35 crc kubenswrapper[5118]: I1208 17:56:35.837061 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dfcd1bd-ac9d-4eba-b160-b7f4335fb440-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.295011 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" event={"ID":"f97402a7-57a3-4f4a-af9f-478d646d2cbc","Type":"ContainerDied","Data":"3182b268e15e5a9dae6bd9d62c0e1c15361887d6f5afe589908eb4becf27d9e8"} Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.295066 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3182b268e15e5a9dae6bd9d62c0e1c15361887d6f5afe589908eb4becf27d9e8" Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.295105 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682f55kx" Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.298644 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.298757 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113ms9qq" event={"ID":"8dfcd1bd-ac9d-4eba-b160-b7f4335fb440","Type":"ContainerDied","Data":"3c2d2aabec3ddf467be563ae844cae07e4cb0c8bd4763b2ce84272a0a197550d"} Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.299290 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c2d2aabec3ddf467be563ae844cae07e4cb0c8bd4763b2ce84272a0a197550d" Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.307146 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" event={"ID":"c70d8b4a-afd5-4ece-bd7f-9caf1f100d65","Type":"ContainerDied","Data":"cb663a0d10c4cf8f0556c3d9ba76fa1ca496666a39393c22677549dfa2d54c1f"} Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.307240 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb663a0d10c4cf8f0556c3d9ba76fa1ca496666a39393c22677549dfa2d54c1f" Dec 08 17:56:36 crc kubenswrapper[5118]: I1208 17:56:36.307462 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f64tgj" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.598897 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-456sz"] Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.599948 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.599963 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.599977 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerName="pull" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.599984 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerName="pull" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.599994 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerName="pull" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600001 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerName="pull" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600013 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerName="pull" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600019 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerName="pull" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600031 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerName="util" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600040 5118 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerName="util" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600055 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600061 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600079 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600086 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600099 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerName="util" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600106 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerName="util" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600122 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerName="util" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600128 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerName="util" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600248 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8dfcd1bd-ac9d-4eba-b160-b7f4335fb440" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600264 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c70d8b4a-afd5-4ece-bd7f-9caf1f100d65" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.600280 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="f97402a7-57a3-4f4a-af9f-478d646d2cbc" containerName="extract" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.608996 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-456sz"] Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.609093 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.610775 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-xvhmb\"" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.683660 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9wvz\" (UniqueName: \"kubernetes.io/projected/871b0dde-aad5-4e54-bd14-1c4bc8779b60-kube-api-access-l9wvz\") pod \"interconnect-operator-78b9bd8798-456sz\" (UID: \"871b0dde-aad5-4e54-bd14-1c4bc8779b60\") " pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.784526 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l9wvz\" (UniqueName: \"kubernetes.io/projected/871b0dde-aad5-4e54-bd14-1c4bc8779b60-kube-api-access-l9wvz\") pod \"interconnect-operator-78b9bd8798-456sz\" (UID: \"871b0dde-aad5-4e54-bd14-1c4bc8779b60\") " pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.803902 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9wvz\" (UniqueName: \"kubernetes.io/projected/871b0dde-aad5-4e54-bd14-1c4bc8779b60-kube-api-access-l9wvz\") pod \"interconnect-operator-78b9bd8798-456sz\" (UID: \"871b0dde-aad5-4e54-bd14-1c4bc8779b60\") " pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" Dec 08 17:56:39 crc kubenswrapper[5118]: I1208 17:56:39.938136 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" Dec 08 17:56:40 crc kubenswrapper[5118]: I1208 17:56:40.171974 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-456sz"] Dec 08 17:56:40 crc kubenswrapper[5118]: I1208 17:56:40.332564 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" event={"ID":"871b0dde-aad5-4e54-bd14-1c4bc8779b60","Type":"ContainerStarted","Data":"1d8201820a175b8c1c05383eb05125d6444707e713bbea7f072e21acc90251cf"} Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.051949 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-zs8hl"] Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.057507 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.059332 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-tqm5c\"" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.066956 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-zs8hl"] Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.099515 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj895\" (UniqueName: \"kubernetes.io/projected/b4cd1da4-b555-42d4-b09a-38f141ee7dc4-kube-api-access-xj895\") pod \"service-telemetry-operator-79647f8775-zs8hl\" (UID: \"b4cd1da4-b555-42d4-b09a-38f141ee7dc4\") " pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.099651 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b4cd1da4-b555-42d4-b09a-38f141ee7dc4-runner\") pod \"service-telemetry-operator-79647f8775-zs8hl\" (UID: \"b4cd1da4-b555-42d4-b09a-38f141ee7dc4\") " pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.201332 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b4cd1da4-b555-42d4-b09a-38f141ee7dc4-runner\") pod \"service-telemetry-operator-79647f8775-zs8hl\" (UID: \"b4cd1da4-b555-42d4-b09a-38f141ee7dc4\") " pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.201390 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xj895\" (UniqueName: \"kubernetes.io/projected/b4cd1da4-b555-42d4-b09a-38f141ee7dc4-kube-api-access-xj895\") pod \"service-telemetry-operator-79647f8775-zs8hl\" (UID: \"b4cd1da4-b555-42d4-b09a-38f141ee7dc4\") " pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.202134 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b4cd1da4-b555-42d4-b09a-38f141ee7dc4-runner\") pod \"service-telemetry-operator-79647f8775-zs8hl\" (UID: \"b4cd1da4-b555-42d4-b09a-38f141ee7dc4\") " pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.227836 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj895\" (UniqueName: \"kubernetes.io/projected/b4cd1da4-b555-42d4-b09a-38f141ee7dc4-kube-api-access-xj895\") pod \"service-telemetry-operator-79647f8775-zs8hl\" (UID: \"b4cd1da4-b555-42d4-b09a-38f141ee7dc4\") " pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.382287 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" Dec 08 17:56:41 crc kubenswrapper[5118]: I1208 17:56:41.795677 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-zs8hl"] Dec 08 17:56:41 crc kubenswrapper[5118]: W1208 17:56:41.803263 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4cd1da4_b555_42d4_b09a_38f141ee7dc4.slice/crio-cc64fdcf023ae6ec287ba1b3bf0cef5081db5760d0ee172cbb13cf476e1bf387 WatchSource:0}: Error finding container cc64fdcf023ae6ec287ba1b3bf0cef5081db5760d0ee172cbb13cf476e1bf387: Status 404 returned error can't find the container with id cc64fdcf023ae6ec287ba1b3bf0cef5081db5760d0ee172cbb13cf476e1bf387 Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.331898 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-w8r45"] Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.336587 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.339962 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-7jw7l\"" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.343154 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-w8r45"] Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.373343 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" event={"ID":"b4cd1da4-b555-42d4-b09a-38f141ee7dc4","Type":"ContainerStarted","Data":"cc64fdcf023ae6ec287ba1b3bf0cef5081db5760d0ee172cbb13cf476e1bf387"} Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.426486 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxjn\" (UniqueName: \"kubernetes.io/projected/88186169-23e9-44fb-a70c-0f6fe06b2800-kube-api-access-ttxjn\") pod \"smart-gateway-operator-5cd794ff55-w8r45\" (UID: \"88186169-23e9-44fb-a70c-0f6fe06b2800\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.426602 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/88186169-23e9-44fb-a70c-0f6fe06b2800-runner\") pod \"smart-gateway-operator-5cd794ff55-w8r45\" (UID: \"88186169-23e9-44fb-a70c-0f6fe06b2800\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.527481 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxjn\" (UniqueName: \"kubernetes.io/projected/88186169-23e9-44fb-a70c-0f6fe06b2800-kube-api-access-ttxjn\") pod \"smart-gateway-operator-5cd794ff55-w8r45\" (UID: \"88186169-23e9-44fb-a70c-0f6fe06b2800\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.527546 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/88186169-23e9-44fb-a70c-0f6fe06b2800-runner\") pod \"smart-gateway-operator-5cd794ff55-w8r45\" (UID: 
\"88186169-23e9-44fb-a70c-0f6fe06b2800\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.527950 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/88186169-23e9-44fb-a70c-0f6fe06b2800-runner\") pod \"smart-gateway-operator-5cd794ff55-w8r45\" (UID: \"88186169-23e9-44fb-a70c-0f6fe06b2800\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.562527 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxjn\" (UniqueName: \"kubernetes.io/projected/88186169-23e9-44fb-a70c-0f6fe06b2800-kube-api-access-ttxjn\") pod \"smart-gateway-operator-5cd794ff55-w8r45\" (UID: \"88186169-23e9-44fb-a70c-0f6fe06b2800\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:42 crc kubenswrapper[5118]: I1208 17:56:42.703175 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" Dec 08 17:56:43 crc kubenswrapper[5118]: I1208 17:56:43.278271 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-w8r45"] Dec 08 17:56:43 crc kubenswrapper[5118]: I1208 17:56:43.383185 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" event={"ID":"88186169-23e9-44fb-a70c-0f6fe06b2800","Type":"ContainerStarted","Data":"935e5adaa947f7579267581015d5cf9e7812bcc51aa5127e2dbb772b426d393f"} Dec 08 17:57:01 crc kubenswrapper[5118]: I1208 17:57:01.962224 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:57:01 crc kubenswrapper[5118]: I1208 17:57:01.962780 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:57:06 crc kubenswrapper[5118]: I1208 17:57:06.563391 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" event={"ID":"871b0dde-aad5-4e54-bd14-1c4bc8779b60","Type":"ContainerStarted","Data":"78c7a6304373b9d8bbf8ce68e516149981c19313797eb0d7d1947051ee876f85"} Dec 08 17:57:06 crc kubenswrapper[5118]: I1208 17:57:06.566129 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" event={"ID":"b4cd1da4-b555-42d4-b09a-38f141ee7dc4","Type":"ContainerStarted","Data":"3a21cd1b2f501351e58b3781f2bed15afd0ea184615e7ddb1e5be0235b1a7775"} Dec 08 17:57:06 crc kubenswrapper[5118]: I1208 17:57:06.582211 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-456sz" podStartSLOduration=7.719862979 podStartE2EDuration="27.582191848s" podCreationTimestamp="2025-12-08 17:56:39 +0000 UTC" firstStartedPulling="2025-12-08 17:56:40.175783766 +0000 UTC m=+857.077107860" lastFinishedPulling="2025-12-08 17:57:00.038112635 +0000 UTC m=+876.939436729" 
observedRunningTime="2025-12-08 17:57:06.580092283 +0000 UTC m=+883.481416417" watchObservedRunningTime="2025-12-08 17:57:06.582191848 +0000 UTC m=+883.483515932" Dec 08 17:57:06 crc kubenswrapper[5118]: I1208 17:57:06.597650 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-79647f8775-zs8hl" podStartSLOduration=1.10294946 podStartE2EDuration="25.59763053s" podCreationTimestamp="2025-12-08 17:56:41 +0000 UTC" firstStartedPulling="2025-12-08 17:56:41.80541675 +0000 UTC m=+858.706740844" lastFinishedPulling="2025-12-08 17:57:06.30009783 +0000 UTC m=+883.201421914" observedRunningTime="2025-12-08 17:57:06.593204635 +0000 UTC m=+883.494528739" watchObservedRunningTime="2025-12-08 17:57:06.59763053 +0000 UTC m=+883.498954624" Dec 08 17:57:07 crc kubenswrapper[5118]: I1208 17:57:07.575699 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" event={"ID":"88186169-23e9-44fb-a70c-0f6fe06b2800","Type":"ContainerStarted","Data":"018ca33c45bb426da7c629b89f77b64ff95ca602ae0cf4b3e1436dbf94487999"} Dec 08 17:57:07 crc kubenswrapper[5118]: I1208 17:57:07.601215 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5cd794ff55-w8r45" podStartSLOduration=2.454660429 podStartE2EDuration="25.601187705s" podCreationTimestamp="2025-12-08 17:56:42 +0000 UTC" firstStartedPulling="2025-12-08 17:56:43.289432047 +0000 UTC m=+860.190756141" lastFinishedPulling="2025-12-08 17:57:06.435959323 +0000 UTC m=+883.337283417" observedRunningTime="2025-12-08 17:57:07.593597207 +0000 UTC m=+884.494921321" watchObservedRunningTime="2025-12-08 17:57:07.601187705 +0000 UTC m=+884.502511819" Dec 08 17:57:23 crc kubenswrapper[5118]: I1208 17:57:23.725295 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dlvbf_a091751f-234c-43ee-8324-ebb98bb3ec36/kube-multus/0.log" Dec 08 17:57:23 crc kubenswrapper[5118]: I1208 17:57:23.735239 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dlvbf_a091751f-234c-43ee-8324-ebb98bb3ec36/kube-multus/0.log" Dec 08 17:57:23 crc kubenswrapper[5118]: I1208 17:57:23.737117 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:57:23 crc kubenswrapper[5118]: I1208 17:57:23.740900 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.556133 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-76n5w"] Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.574037 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.576421 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.577137 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.577177 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.577268 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.577154 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.577417 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-nxt7g\"" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.577522 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.579024 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-76n5w"] Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.759563 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.759626 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.759743 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.759776 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") 
" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.759941 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhpm6\" (UniqueName: \"kubernetes.io/projected/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-kube-api-access-rhpm6\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.760040 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-users\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.760105 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-config\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.862110 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.862198 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhpm6\" (UniqueName: \"kubernetes.io/projected/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-kube-api-access-rhpm6\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.862249 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-users\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.862292 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-config\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.862435 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 
17:57:28.862473 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.862571 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.864302 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-config\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.869643 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.870157 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.870683 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.870828 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-users\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.871219 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.886681 
5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhpm6\" (UniqueName: \"kubernetes.io/projected/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-kube-api-access-rhpm6\") pod \"default-interconnect-55bf8d5cb-76n5w\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:28 crc kubenswrapper[5118]: I1208 17:57:28.892605 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:57:29 crc kubenswrapper[5118]: I1208 17:57:29.354480 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-76n5w"] Dec 08 17:57:29 crc kubenswrapper[5118]: I1208 17:57:29.735478 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" event={"ID":"df9f5211-ab02-49a8-82e6-0c2f4b07bc52","Type":"ContainerStarted","Data":"b9eccbab184d45ec8b09d562fad481be8c01d5ebbe8dc86e74b7741c2826ecb8"} Dec 08 17:57:31 crc kubenswrapper[5118]: I1208 17:57:31.962721 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:57:31 crc kubenswrapper[5118]: I1208 17:57:31.963090 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:57:34 crc kubenswrapper[5118]: I1208 17:57:34.773636 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" event={"ID":"df9f5211-ab02-49a8-82e6-0c2f4b07bc52","Type":"ContainerStarted","Data":"23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353"} Dec 08 17:57:34 crc kubenswrapper[5118]: I1208 17:57:34.796343 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" podStartSLOduration=2.301816769 podStartE2EDuration="6.79632334s" podCreationTimestamp="2025-12-08 17:57:28 +0000 UTC" firstStartedPulling="2025-12-08 17:57:29.362079002 +0000 UTC m=+906.263403096" lastFinishedPulling="2025-12-08 17:57:33.856585563 +0000 UTC m=+910.757909667" observedRunningTime="2025-12-08 17:57:34.794909394 +0000 UTC m=+911.696233498" watchObservedRunningTime="2025-12-08 17:57:34.79632334 +0000 UTC m=+911.697647434" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.586923 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.604304 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.608082 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.608267 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.608376 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.608500 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-p6qm4\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.608533 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.608655 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.608834 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.610064 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.610149 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.741930 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-config\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742034 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742091 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d62a6f6-b57c-48e0-9279-d8dadd01a921-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742129 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d62a6f6-b57c-48e0-9279-d8dadd01a921-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 
17:57:38.742181 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742222 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d62a6f6-b57c-48e0-9279-d8dadd01a921-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742271 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742297 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr5j\" (UniqueName: \"kubernetes.io/projected/3d62a6f6-b57c-48e0-9279-d8dadd01a921-kube-api-access-plr5j\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742327 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-web-config\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.742390 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d62a6f6-b57c-48e0-9279-d8dadd01a921-config-out\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845300 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845583 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d62a6f6-b57c-48e0-9279-d8dadd01a921-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845610 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d62a6f6-b57c-48e0-9279-d8dadd01a921-prometheus-default-rulefiles-0\") pod 
\"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845653 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845686 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d62a6f6-b57c-48e0-9279-d8dadd01a921-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845752 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845778 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plr5j\" (UniqueName: \"kubernetes.io/projected/3d62a6f6-b57c-48e0-9279-d8dadd01a921-kube-api-access-plr5j\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845809 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-web-config\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845837 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d62a6f6-b57c-48e0-9279-d8dadd01a921-config-out\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.845961 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-config\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.846667 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d62a6f6-b57c-48e0-9279-d8dadd01a921-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: E1208 17:57:38.846762 5118 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 08 17:57:38 crc kubenswrapper[5118]: E1208 17:57:38.846810 5118 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls podName:3d62a6f6-b57c-48e0-9279-d8dadd01a921 nodeName:}" failed. No retries permitted until 2025-12-08 17:57:39.346796618 +0000 UTC m=+916.248120712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3d62a6f6-b57c-48e0-9279-d8dadd01a921") : secret "default-prometheus-proxy-tls" not found Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.847014 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d62a6f6-b57c-48e0-9279-d8dadd01a921-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.851272 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d62a6f6-b57c-48e0-9279-d8dadd01a921-config-out\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.851629 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.852123 5118 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.852234 5118 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/227c7b0be901c0be53a6f1bd48ffed98cff4d354bc631c0ea763997f65103264/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.852204 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d62a6f6-b57c-48e0-9279-d8dadd01a921-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.853217 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-web-config\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.854226 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-config\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.864648 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr5j\" (UniqueName: \"kubernetes.io/projected/3d62a6f6-b57c-48e0-9279-d8dadd01a921-kube-api-access-plr5j\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:38 crc kubenswrapper[5118]: I1208 17:57:38.873177 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e87a7084-eea8-46c1-a85e-77b652e25ad6\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:39 crc kubenswrapper[5118]: I1208 17:57:39.353379 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:39 crc kubenswrapper[5118]: E1208 17:57:39.353566 5118 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 08 17:57:39 crc kubenswrapper[5118]: E1208 17:57:39.354409 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls podName:3d62a6f6-b57c-48e0-9279-d8dadd01a921 nodeName:}" failed. No retries permitted until 2025-12-08 17:57:40.354373166 +0000 UTC m=+917.255697300 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3d62a6f6-b57c-48e0-9279-d8dadd01a921") : secret "default-prometheus-proxy-tls" not found Dec 08 17:57:40 crc kubenswrapper[5118]: I1208 17:57:40.368832 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:40 crc kubenswrapper[5118]: I1208 17:57:40.372972 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d62a6f6-b57c-48e0-9279-d8dadd01a921-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d62a6f6-b57c-48e0-9279-d8dadd01a921\") " pod="service-telemetry/prometheus-default-0" Dec 08 17:57:40 crc kubenswrapper[5118]: I1208 17:57:40.430376 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 08 17:57:40 crc kubenswrapper[5118]: I1208 17:57:40.660489 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 08 17:57:40 crc kubenswrapper[5118]: I1208 17:57:40.815617 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d62a6f6-b57c-48e0-9279-d8dadd01a921","Type":"ContainerStarted","Data":"36ca181347274fbe8b29a8090ba78e0141864b9aeeccac5eab1d78b763bfc7c0"} Dec 08 17:57:44 crc kubenswrapper[5118]: I1208 17:57:44.868977 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d62a6f6-b57c-48e0-9279-d8dadd01a921","Type":"ContainerStarted","Data":"e6c7573c2cef367549fd3f598e39a14ffec6661e883c00be8596d9edc848c423"} Dec 08 17:57:48 crc kubenswrapper[5118]: I1208 17:57:48.450229 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn"] Dec 08 17:57:48 crc kubenswrapper[5118]: I1208 17:57:48.475535 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn"] Dec 08 17:57:48 crc kubenswrapper[5118]: I1208 17:57:48.475696 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" Dec 08 17:57:48 crc kubenswrapper[5118]: I1208 17:57:48.593062 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l59v2\" (UniqueName: \"kubernetes.io/projected/37bee34a-f42e-4493-85f3-7f5e5cbd7301-kube-api-access-l59v2\") pod \"default-snmp-webhook-6774d8dfbc-75fxn\" (UID: \"37bee34a-f42e-4493-85f3-7f5e5cbd7301\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" Dec 08 17:57:48 crc kubenswrapper[5118]: I1208 17:57:48.694714 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l59v2\" (UniqueName: \"kubernetes.io/projected/37bee34a-f42e-4493-85f3-7f5e5cbd7301-kube-api-access-l59v2\") pod \"default-snmp-webhook-6774d8dfbc-75fxn\" (UID: \"37bee34a-f42e-4493-85f3-7f5e5cbd7301\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" Dec 08 17:57:48 crc kubenswrapper[5118]: I1208 17:57:48.717738 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l59v2\" (UniqueName: \"kubernetes.io/projected/37bee34a-f42e-4493-85f3-7f5e5cbd7301-kube-api-access-l59v2\") pod \"default-snmp-webhook-6774d8dfbc-75fxn\" (UID: \"37bee34a-f42e-4493-85f3-7f5e5cbd7301\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" Dec 08 17:57:48 crc kubenswrapper[5118]: I1208 17:57:48.794858 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" Dec 08 17:57:49 crc kubenswrapper[5118]: I1208 17:57:49.137975 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn"] Dec 08 17:57:49 crc kubenswrapper[5118]: W1208 17:57:49.156969 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37bee34a_f42e_4493_85f3_7f5e5cbd7301.slice/crio-f8e9688711f76b654202dc32616384db9c588343391f2380d5559ee3a510781f WatchSource:0}: Error finding container f8e9688711f76b654202dc32616384db9c588343391f2380d5559ee3a510781f: Status 404 returned error can't find the container with id f8e9688711f76b654202dc32616384db9c588343391f2380d5559ee3a510781f Dec 08 17:57:49 crc kubenswrapper[5118]: I1208 17:57:49.912427 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" event={"ID":"37bee34a-f42e-4493-85f3-7f5e5cbd7301","Type":"ContainerStarted","Data":"f8e9688711f76b654202dc32616384db9c588343391f2380d5559ee3a510781f"} Dec 08 17:57:51 crc kubenswrapper[5118]: I1208 17:57:51.928093 5118 generic.go:358] "Generic (PLEG): container finished" podID="3d62a6f6-b57c-48e0-9279-d8dadd01a921" containerID="e6c7573c2cef367549fd3f598e39a14ffec6661e883c00be8596d9edc848c423" exitCode=0 Dec 08 17:57:51 crc kubenswrapper[5118]: I1208 17:57:51.928211 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d62a6f6-b57c-48e0-9279-d8dadd01a921","Type":"ContainerDied","Data":"e6c7573c2cef367549fd3f598e39a14ffec6661e883c00be8596d9edc848c423"} Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.351171 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.363811 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.365179 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.366365 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.366437 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.366376 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.367245 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.367495 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-5zfwx\"" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.370739 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.559211 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwmm9\" (UniqueName: \"kubernetes.io/projected/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-kube-api-access-vwmm9\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.559273 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-web-config\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.559361 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.559418 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-tls-assets\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.559501 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-config-out\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.559682 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.559807 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.560110 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-config-volume\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.560261 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661262 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661347 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-config-volume\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661396 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661432 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwmm9\" (UniqueName: \"kubernetes.io/projected/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-kube-api-access-vwmm9\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661453 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-web-config\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") 
" pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661477 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661496 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-tls-assets\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661544 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-config-out\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.661571 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: E1208 17:57:52.661719 5118 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 08 17:57:52 crc kubenswrapper[5118]: E1208 17:57:52.661787 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls podName:81e17e77-b0f9-4df6-8c85-e06d1fd7a46a nodeName:}" failed. No retries permitted until 2025-12-08 17:57:53.161767824 +0000 UTC m=+930.063091928 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "81e17e77-b0f9-4df6-8c85-e06d1fd7a46a") : secret "default-alertmanager-proxy-tls" not found Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.670679 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-config-out\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.670814 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.671591 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-tls-assets\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.677425 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-web-config\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.677973 5118 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.678015 5118 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/393e7c7c26b6dad40c4d1f8168ac008511936ba5e851dde05f6d35b419ee9b4e/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.680900 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-config-volume\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.681273 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwmm9\" (UniqueName: \"kubernetes.io/projected/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-kube-api-access-vwmm9\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.700633 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:52 crc kubenswrapper[5118]: I1208 17:57:52.709834 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b15df9e-01ca-4097-a731-1c1b05c63480\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:53 crc kubenswrapper[5118]: I1208 17:57:53.169916 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:53 crc kubenswrapper[5118]: E1208 17:57:53.170064 5118 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 08 17:57:53 crc kubenswrapper[5118]: E1208 17:57:53.170324 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls podName:81e17e77-b0f9-4df6-8c85-e06d1fd7a46a nodeName:}" failed. No retries permitted until 2025-12-08 17:57:54.170305048 +0000 UTC m=+931.071629142 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "81e17e77-b0f9-4df6-8c85-e06d1fd7a46a") : secret "default-alertmanager-proxy-tls" not found Dec 08 17:57:54 crc kubenswrapper[5118]: I1208 17:57:54.215042 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:54 crc kubenswrapper[5118]: E1208 17:57:54.215258 5118 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 08 17:57:54 crc kubenswrapper[5118]: E1208 17:57:54.215355 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls podName:81e17e77-b0f9-4df6-8c85-e06d1fd7a46a nodeName:}" failed. No retries permitted until 2025-12-08 17:57:56.215329817 +0000 UTC m=+933.116653921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "81e17e77-b0f9-4df6-8c85-e06d1fd7a46a") : secret "default-alertmanager-proxy-tls" not found Dec 08 17:57:56 crc kubenswrapper[5118]: I1208 17:57:56.243484 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:56 crc kubenswrapper[5118]: I1208 17:57:56.264005 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81e17e77-b0f9-4df6-8c85-e06d1fd7a46a-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a\") " pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:56 crc kubenswrapper[5118]: I1208 17:57:56.287355 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 08 17:57:57 crc kubenswrapper[5118]: I1208 17:57:57.995470 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 08 17:57:58 crc kubenswrapper[5118]: I1208 17:57:58.983388 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" event={"ID":"37bee34a-f42e-4493-85f3-7f5e5cbd7301","Type":"ContainerStarted","Data":"fc7a9b49acda648c324702fc105887c467f79d40143309a6e1da85462c175c0c"} Dec 08 17:57:58 crc kubenswrapper[5118]: I1208 17:57:58.990286 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a","Type":"ContainerStarted","Data":"725078b150652ee37c6f8be09b83faf913154086093a95789f27cecb17163254"} Dec 08 17:57:59 crc kubenswrapper[5118]: I1208 17:57:59.998031 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a","Type":"ContainerStarted","Data":"e1ca3709454dde0f975cd625db200e6158016352dea0eb93866c2ae48719f8b2"} Dec 08 17:58:00 crc kubenswrapper[5118]: I1208 17:58:00.040401 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-75fxn" podStartSLOduration=3.233456858 podStartE2EDuration="12.040382909s" podCreationTimestamp="2025-12-08 17:57:48 +0000 UTC" firstStartedPulling="2025-12-08 17:57:49.158955572 +0000 UTC m=+926.060279686" lastFinishedPulling="2025-12-08 17:57:57.965871173 +0000 UTC m=+934.867205737" observedRunningTime="2025-12-08 17:57:59.008809518 +0000 UTC m=+935.910133632" watchObservedRunningTime="2025-12-08 17:58:00.040382909 +0000 UTC m=+936.941707003" Dec 08 17:58:01 crc kubenswrapper[5118]: I1208 17:58:01.962808 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:58:01 crc kubenswrapper[5118]: I1208 17:58:01.963567 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:58:01 crc kubenswrapper[5118]: I1208 17:58:01.963654 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 17:58:01 crc kubenswrapper[5118]: I1208 17:58:01.964516 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba4ed75eee971f8ac62b8cc0f3802e18dd9cabd36e3862daae0b5ce56bd2f691"} pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:58:01 crc kubenswrapper[5118]: I1208 17:58:01.964607 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" 
containerID="cri-o://ba4ed75eee971f8ac62b8cc0f3802e18dd9cabd36e3862daae0b5ce56bd2f691" gracePeriod=600 Dec 08 17:58:02 crc kubenswrapper[5118]: I1208 17:58:02.013993 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d62a6f6-b57c-48e0-9279-d8dadd01a921","Type":"ContainerStarted","Data":"e92a60bad24bc4c1c9b1c7ed70cfe2f487db1e9b5df9925a0d905ec0c54aa1f6"} Dec 08 17:58:03 crc kubenswrapper[5118]: I1208 17:58:03.023832 5118 generic.go:358] "Generic (PLEG): container finished" podID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerID="ba4ed75eee971f8ac62b8cc0f3802e18dd9cabd36e3862daae0b5ce56bd2f691" exitCode=0 Dec 08 17:58:03 crc kubenswrapper[5118]: I1208 17:58:03.024311 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerDied","Data":"ba4ed75eee971f8ac62b8cc0f3802e18dd9cabd36e3862daae0b5ce56bd2f691"} Dec 08 17:58:03 crc kubenswrapper[5118]: I1208 17:58:03.024338 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"22c5cc7c9c4c3cf08ced7571f97d2349fe0ba35b6b6efc4a95ae6a4960c893da"} Dec 08 17:58:03 crc kubenswrapper[5118]: I1208 17:58:03.024354 5118 scope.go:117] "RemoveContainer" containerID="e127f5f6ea947945bd90450d12f167e6419e8af6b0458b462fdc7e8064751458" Dec 08 17:58:04 crc kubenswrapper[5118]: I1208 17:58:04.042437 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d62a6f6-b57c-48e0-9279-d8dadd01a921","Type":"ContainerStarted","Data":"ee81b895036c6012904a6833b72ac5621ad0a9e15989493babe55f30e2423c84"} Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.064261 5118 generic.go:358] "Generic (PLEG): container finished" podID="81e17e77-b0f9-4df6-8c85-e06d1fd7a46a" containerID="e1ca3709454dde0f975cd625db200e6158016352dea0eb93866c2ae48719f8b2" exitCode=0 Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.065065 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a","Type":"ContainerDied","Data":"e1ca3709454dde0f975cd625db200e6158016352dea0eb93866c2ae48719f8b2"} Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.126346 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx"] Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.131417 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.134051 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.134392 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.134622 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.134779 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-vjrnk\"" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.135945 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx"] Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.198487 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.198722 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.198810 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2828m\" (UniqueName: \"kubernetes.io/projected/0e2a1994-199f-4b38-903b-cba9061dfcad-kube-api-access-2828m\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.198894 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0e2a1994-199f-4b38-903b-cba9061dfcad-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.198935 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0e2a1994-199f-4b38-903b-cba9061dfcad-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: 
I1208 17:58:06.300529 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0e2a1994-199f-4b38-903b-cba9061dfcad-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.300584 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0e2a1994-199f-4b38-903b-cba9061dfcad-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.300659 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.300722 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.300755 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2828m\" (UniqueName: \"kubernetes.io/projected/0e2a1994-199f-4b38-903b-cba9061dfcad-kube-api-access-2828m\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.301404 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0e2a1994-199f-4b38-903b-cba9061dfcad-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: E1208 17:58:06.301495 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 17:58:06 crc kubenswrapper[5118]: E1208 17:58:06.301556 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls podName:0e2a1994-199f-4b38-903b-cba9061dfcad nodeName:}" failed. No retries permitted until 2025-12-08 17:58:06.801540613 +0000 UTC m=+943.702864707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" (UID: "0e2a1994-199f-4b38-903b-cba9061dfcad") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.301644 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0e2a1994-199f-4b38-903b-cba9061dfcad-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.311905 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.321165 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2828m\" (UniqueName: \"kubernetes.io/projected/0e2a1994-199f-4b38-903b-cba9061dfcad-kube-api-access-2828m\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: I1208 17:58:06.808421 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:06 crc kubenswrapper[5118]: E1208 17:58:06.808574 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 17:58:06 crc kubenswrapper[5118]: E1208 17:58:06.808942 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls podName:0e2a1994-199f-4b38-903b-cba9061dfcad nodeName:}" failed. No retries permitted until 2025-12-08 17:58:07.808922578 +0000 UTC m=+944.710246672 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" (UID: "0e2a1994-199f-4b38-903b-cba9061dfcad") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 17:58:07 crc kubenswrapper[5118]: I1208 17:58:07.824337 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:07 crc kubenswrapper[5118]: I1208 17:58:07.830067 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2a1994-199f-4b38-903b-cba9061dfcad-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-4zrzx\" (UID: \"0e2a1994-199f-4b38-903b-cba9061dfcad\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:07 crc kubenswrapper[5118]: I1208 17:58:07.949730 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" Dec 08 17:58:08 crc kubenswrapper[5118]: I1208 17:58:08.906735 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v"] Dec 08 17:58:08 crc kubenswrapper[5118]: I1208 17:58:08.920453 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:08 crc kubenswrapper[5118]: I1208 17:58:08.923236 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Dec 08 17:58:08 crc kubenswrapper[5118]: I1208 17:58:08.923368 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Dec 08 17:58:08 crc kubenswrapper[5118]: I1208 17:58:08.926474 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v"] Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.042508 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.042652 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.042774 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ef58ecee-c967-4d4f-946b-8c8123a73084-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.042894 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrqv\" (UniqueName: \"kubernetes.io/projected/ef58ecee-c967-4d4f-946b-8c8123a73084-kube-api-access-wlrqv\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.043129 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef58ecee-c967-4d4f-946b-8c8123a73084-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.144302 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.144368 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.144653 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ef58ecee-c967-4d4f-946b-8c8123a73084-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.144779 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrqv\" (UniqueName: \"kubernetes.io/projected/ef58ecee-c967-4d4f-946b-8c8123a73084-kube-api-access-wlrqv\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.144939 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef58ecee-c967-4d4f-946b-8c8123a73084-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: E1208 17:58:09.144793 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 08 17:58:09 crc kubenswrapper[5118]: E1208 17:58:09.145157 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls podName:ef58ecee-c967-4d4f-946b-8c8123a73084 nodeName:}" failed. No retries permitted until 2025-12-08 17:58:09.645120287 +0000 UTC m=+946.546444381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" (UID: "ef58ecee-c967-4d4f-946b-8c8123a73084") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.145445 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef58ecee-c967-4d4f-946b-8c8123a73084-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.145971 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ef58ecee-c967-4d4f-946b-8c8123a73084-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.152970 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.161961 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrqv\" (UniqueName: \"kubernetes.io/projected/ef58ecee-c967-4d4f-946b-8c8123a73084-kube-api-access-wlrqv\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: I1208 17:58:09.653979 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:09 crc kubenswrapper[5118]: E1208 17:58:09.654234 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 08 17:58:09 crc kubenswrapper[5118]: E1208 17:58:09.654376 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls podName:ef58ecee-c967-4d4f-946b-8c8123a73084 nodeName:}" failed. No retries permitted until 2025-12-08 17:58:10.654347399 +0000 UTC m=+947.555671523 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" (UID: "ef58ecee-c967-4d4f-946b-8c8123a73084") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 08 17:58:10 crc kubenswrapper[5118]: I1208 17:58:10.637091 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx"] Dec 08 17:58:10 crc kubenswrapper[5118]: I1208 17:58:10.668424 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:10 crc kubenswrapper[5118]: I1208 17:58:10.675820 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef58ecee-c967-4d4f-946b-8c8123a73084-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v\" (UID: \"ef58ecee-c967-4d4f-946b-8c8123a73084\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:10 crc kubenswrapper[5118]: I1208 17:58:10.744944 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" Dec 08 17:58:11 crc kubenswrapper[5118]: I1208 17:58:11.101865 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d62a6f6-b57c-48e0-9279-d8dadd01a921","Type":"ContainerStarted","Data":"d347fa72e7ecd1e64dc39248e8ab08b737fbc63513146e89720dfbcf3e2e96ef"} Dec 08 17:58:11 crc kubenswrapper[5118]: I1208 17:58:11.104042 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerStarted","Data":"3522d94343d4e36504f002acd25129011e004332fa5391e14e3ba4bc63217040"} Dec 08 17:58:11 crc kubenswrapper[5118]: I1208 17:58:11.129946 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.498879273 podStartE2EDuration="34.129919166s" podCreationTimestamp="2025-12-08 17:57:37 +0000 UTC" firstStartedPulling="2025-12-08 17:57:40.666329396 +0000 UTC m=+917.567653520" lastFinishedPulling="2025-12-08 17:58:10.297369319 +0000 UTC m=+947.198693413" observedRunningTime="2025-12-08 17:58:11.125405219 +0000 UTC m=+948.026729313" watchObservedRunningTime="2025-12-08 17:58:11.129919166 +0000 UTC m=+948.031243260" Dec 08 17:58:11 crc kubenswrapper[5118]: I1208 17:58:11.606579 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v"] Dec 08 17:58:11 crc kubenswrapper[5118]: W1208 17:58:11.625007 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef58ecee_c967_4d4f_946b_8c8123a73084.slice/crio-15dd03737f23b7b6bcb46c7c34055a189a88ce71b67519c8210e73e1a6196a9a 
WatchSource:0}: Error finding container 15dd03737f23b7b6bcb46c7c34055a189a88ce71b67519c8210e73e1a6196a9a: Status 404 returned error can't find the container with id 15dd03737f23b7b6bcb46c7c34055a189a88ce71b67519c8210e73e1a6196a9a Dec 08 17:58:12 crc kubenswrapper[5118]: I1208 17:58:12.112370 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a","Type":"ContainerStarted","Data":"47b56057ab69d50eaee94d5ad589d1657980422aeed4786992d93858dfda7b5e"} Dec 08 17:58:12 crc kubenswrapper[5118]: I1208 17:58:12.115307 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerStarted","Data":"15dd03737f23b7b6bcb46c7c34055a189a88ce71b67519c8210e73e1a6196a9a"} Dec 08 17:58:12 crc kubenswrapper[5118]: I1208 17:58:12.119848 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerStarted","Data":"5c3e98bcff2f87a945c98b381b0613e2a751f0d75e63eb280a27b4ce39b93dc8"} Dec 08 17:58:12 crc kubenswrapper[5118]: I1208 17:58:12.998656 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp"] Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.139809 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp"] Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.139860 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerStarted","Data":"7907b31ed60ebeb22abca5fbcdacad14c616f75c8d4c5a868fd6cb4f261929a5"} Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.139972 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.142129 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.142483 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.213147 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f486b0de-c62f-46a2-8649-dca61a92506c-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.213220 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndth6\" (UniqueName: \"kubernetes.io/projected/f486b0de-c62f-46a2-8649-dca61a92506c-kube-api-access-ndth6\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.213257 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.213308 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f486b0de-c62f-46a2-8649-dca61a92506c-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.213332 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.314610 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndth6\" (UniqueName: \"kubernetes.io/projected/f486b0de-c62f-46a2-8649-dca61a92506c-kube-api-access-ndth6\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.314660 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.314705 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f486b0de-c62f-46a2-8649-dca61a92506c-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.314721 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.314783 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f486b0de-c62f-46a2-8649-dca61a92506c-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: E1208 17:58:13.314944 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 08 17:58:13 crc kubenswrapper[5118]: E1208 17:58:13.315015 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls podName:f486b0de-c62f-46a2-8649-dca61a92506c nodeName:}" failed. No retries permitted until 2025-12-08 17:58:13.814997171 +0000 UTC m=+950.716321265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" (UID: "f486b0de-c62f-46a2-8649-dca61a92506c") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.315387 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f486b0de-c62f-46a2-8649-dca61a92506c-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.315888 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f486b0de-c62f-46a2-8649-dca61a92506c-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.337754 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.338431 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndth6\" (UniqueName: \"kubernetes.io/projected/f486b0de-c62f-46a2-8649-dca61a92506c-kube-api-access-ndth6\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: I1208 17:58:13.831691 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:13 crc kubenswrapper[5118]: E1208 17:58:13.832078 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 08 17:58:13 crc kubenswrapper[5118]: E1208 17:58:13.832188 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls podName:f486b0de-c62f-46a2-8649-dca61a92506c nodeName:}" failed. No retries permitted until 2025-12-08 17:58:14.832162918 +0000 UTC m=+951.733487062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" (UID: "f486b0de-c62f-46a2-8649-dca61a92506c") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 08 17:58:14 crc kubenswrapper[5118]: I1208 17:58:14.146768 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a","Type":"ContainerStarted","Data":"d964899c9cadbc945842e48de7dab037aa53cab0c73225b3e9ef0722ad7962e1"} Dec 08 17:58:14 crc kubenswrapper[5118]: I1208 17:58:14.146811 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81e17e77-b0f9-4df6-8c85-e06d1fd7a46a","Type":"ContainerStarted","Data":"d503ea6f67263f1580abd070342e448288584249bd193a8e3ee37d94da7770df"} Dec 08 17:58:14 crc kubenswrapper[5118]: I1208 17:58:14.188500 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=15.453346177 podStartE2EDuration="23.188482557s" podCreationTimestamp="2025-12-08 17:57:51 +0000 UTC" firstStartedPulling="2025-12-08 17:58:06.06615668 +0000 UTC m=+942.967480774" lastFinishedPulling="2025-12-08 17:58:13.80129306 +0000 UTC m=+950.702617154" observedRunningTime="2025-12-08 17:58:14.180190043 +0000 UTC m=+951.081514147" watchObservedRunningTime="2025-12-08 17:58:14.188482557 +0000 UTC m=+951.089806651" Dec 08 17:58:14 crc kubenswrapper[5118]: I1208 17:58:14.845885 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:14 crc kubenswrapper[5118]: I1208 17:58:14.852298 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f486b0de-c62f-46a2-8649-dca61a92506c-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp\" (UID: \"f486b0de-c62f-46a2-8649-dca61a92506c\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:14 crc kubenswrapper[5118]: I1208 17:58:14.961904 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" Dec 08 17:58:15 crc kubenswrapper[5118]: I1208 17:58:15.435441 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Dec 08 17:58:19 crc kubenswrapper[5118]: I1208 17:58:19.453814 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp"] Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.114967 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn"] Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.121014 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.122756 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn"] Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.124430 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.124683 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.165968 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8ecda967-3335-4158-839b-9b4048b8f049-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.166032 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bx5s\" (UniqueName: \"kubernetes.io/projected/8ecda967-3335-4158-839b-9b4048b8f049-kube-api-access-2bx5s\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.166111 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8ecda967-3335-4158-839b-9b4048b8f049-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.166382 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8ecda967-3335-4158-839b-9b4048b8f049-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.224393 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerStarted","Data":"0e9793d14f55d5b15e4d2dbe096215b1d137841ce8dbbb4a87467089f60758e4"} Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.226150 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerStarted","Data":"677eeab8da7fd167f2eb41701230e28a9f38f2ba8fcffa2c3cbacf19a465cc22"} Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.233167 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" 
event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerStarted","Data":"b1fb6ba3cae74e0ee86043b6f3cb1e4703e934232948c758d0aff249013c2d96"} Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.268305 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2bx5s\" (UniqueName: \"kubernetes.io/projected/8ecda967-3335-4158-839b-9b4048b8f049-kube-api-access-2bx5s\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.268413 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8ecda967-3335-4158-839b-9b4048b8f049-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.268451 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8ecda967-3335-4158-839b-9b4048b8f049-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.268508 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8ecda967-3335-4158-839b-9b4048b8f049-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.269025 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8ecda967-3335-4158-839b-9b4048b8f049-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.270980 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8ecda967-3335-4158-839b-9b4048b8f049-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.285092 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8ecda967-3335-4158-839b-9b4048b8f049-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.298356 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bx5s\" (UniqueName: \"kubernetes.io/projected/8ecda967-3335-4158-839b-9b4048b8f049-kube-api-access-2bx5s\") pod 
\"default-cloud1-coll-event-smartgateway-d956b4648-jwkwn\" (UID: \"8ecda967-3335-4158-839b-9b4048b8f049\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.447018 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" Dec 08 17:58:20 crc kubenswrapper[5118]: I1208 17:58:20.905541 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn"] Dec 08 17:58:20 crc kubenswrapper[5118]: W1208 17:58:20.940919 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ecda967_3335_4158_839b_9b4048b8f049.slice/crio-fc460b0b57386679f435426a0847be0a107eb47388ad63b9c335baecad1525cb WatchSource:0}: Error finding container fc460b0b57386679f435426a0847be0a107eb47388ad63b9c335baecad1525cb: Status 404 returned error can't find the container with id fc460b0b57386679f435426a0847be0a107eb47388ad63b9c335baecad1525cb Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.244118 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerStarted","Data":"e87934f74a36739e9fb6b443d1666824f83d307cdd517f3b3112c8838ef6b768"} Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.244175 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerStarted","Data":"b93d45fd80d9854c47e6371943e22c048604e896742cde2d1e57590f24e5ab49"} Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.245937 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" event={"ID":"8ecda967-3335-4158-839b-9b4048b8f049","Type":"ContainerStarted","Data":"fc460b0b57386679f435426a0847be0a107eb47388ad63b9c335baecad1525cb"} Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.423333 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk"] Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.431716 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.437586 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.447730 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk"] Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.485851 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/35c3d7e4-3ad4-4184-a22e-86654ad7867b-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.485987 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/35c3d7e4-3ad4-4184-a22e-86654ad7867b-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.486029 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/35c3d7e4-3ad4-4184-a22e-86654ad7867b-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.486077 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svplm\" (UniqueName: \"kubernetes.io/projected/35c3d7e4-3ad4-4184-a22e-86654ad7867b-kube-api-access-svplm\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.587996 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/35c3d7e4-3ad4-4184-a22e-86654ad7867b-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.588082 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/35c3d7e4-3ad4-4184-a22e-86654ad7867b-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.588134 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/35c3d7e4-3ad4-4184-a22e-86654ad7867b-sg-core-config\") pod 
\"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.588185 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svplm\" (UniqueName: \"kubernetes.io/projected/35c3d7e4-3ad4-4184-a22e-86654ad7867b-kube-api-access-svplm\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.590706 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/35c3d7e4-3ad4-4184-a22e-86654ad7867b-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.593502 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/35c3d7e4-3ad4-4184-a22e-86654ad7867b-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.605897 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svplm\" (UniqueName: \"kubernetes.io/projected/35c3d7e4-3ad4-4184-a22e-86654ad7867b-kube-api-access-svplm\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.626133 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/35c3d7e4-3ad4-4184-a22e-86654ad7867b-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk\" (UID: \"35c3d7e4-3ad4-4184-a22e-86654ad7867b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:21 crc kubenswrapper[5118]: I1208 17:58:21.799036 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" Dec 08 17:58:22 crc kubenswrapper[5118]: I1208 17:58:22.260133 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" event={"ID":"8ecda967-3335-4158-839b-9b4048b8f049","Type":"ContainerStarted","Data":"3e66f251fbfe418249b98effecd5995f8457755cc0cca38a78a65ada1af475d1"} Dec 08 17:58:25 crc kubenswrapper[5118]: I1208 17:58:25.446642 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 08 17:58:25 crc kubenswrapper[5118]: I1208 17:58:25.493014 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 08 17:58:26 crc kubenswrapper[5118]: I1208 17:58:26.347268 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.290925 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk"] Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.368790 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerStarted","Data":"8382b9a6b877c27bee6c98b5eee4b8a13ff89186f914673d42e5f5f7a058c90c"} Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.375807 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-76n5w"] Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.376041 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" podUID="df9f5211-ab02-49a8-82e6-0c2f4b07bc52" containerName="default-interconnect" containerID="cri-o://23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353" gracePeriod=30 Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.378781 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" event={"ID":"35c3d7e4-3ad4-4184-a22e-86654ad7867b","Type":"ContainerStarted","Data":"8a2e7cdfc8f5a1739f56608ea6c7b4584cdaa12d21b19554af13cd002a673eaa"} Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.394459 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" podStartSLOduration=5.136558569 podStartE2EDuration="28.394433713s" podCreationTimestamp="2025-12-08 17:58:06 +0000 UTC" firstStartedPulling="2025-12-08 17:58:10.914526599 +0000 UTC m=+947.815850693" lastFinishedPulling="2025-12-08 17:58:34.172401743 +0000 UTC m=+971.073725837" observedRunningTime="2025-12-08 17:58:34.391330032 +0000 UTC m=+971.292654126" watchObservedRunningTime="2025-12-08 17:58:34.394433713 +0000 UTC m=+971.295757817" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.720389 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.747227 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-rwr2k"] Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.748045 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df9f5211-ab02-49a8-82e6-0c2f4b07bc52" containerName="default-interconnect" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.748067 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f5211-ab02-49a8-82e6-0c2f4b07bc52" containerName="default-interconnect" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.748228 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="df9f5211-ab02-49a8-82e6-0c2f4b07bc52" containerName="default-interconnect" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.752721 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.769002 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-rwr2k"] Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.817371 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-users\") pod \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.817411 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-credentials\") pod \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.817492 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-ca\") pod \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.817511 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhpm6\" (UniqueName: \"kubernetes.io/projected/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-kube-api-access-rhpm6\") pod \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.817562 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-ca\") pod \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.817640 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-config\") pod \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " Dec 08 17:58:34 crc 
kubenswrapper[5118]: I1208 17:58:34.817686 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-credentials\") pod \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\" (UID: \"df9f5211-ab02-49a8-82e6-0c2f4b07bc52\") " Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.819364 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "df9f5211-ab02-49a8-82e6-0c2f4b07bc52" (UID: "df9f5211-ab02-49a8-82e6-0c2f4b07bc52"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.825280 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-kube-api-access-rhpm6" (OuterVolumeSpecName: "kube-api-access-rhpm6") pod "df9f5211-ab02-49a8-82e6-0c2f4b07bc52" (UID: "df9f5211-ab02-49a8-82e6-0c2f4b07bc52"). InnerVolumeSpecName "kube-api-access-rhpm6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.826395 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "df9f5211-ab02-49a8-82e6-0c2f4b07bc52" (UID: "df9f5211-ab02-49a8-82e6-0c2f4b07bc52"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.829013 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "df9f5211-ab02-49a8-82e6-0c2f4b07bc52" (UID: "df9f5211-ab02-49a8-82e6-0c2f4b07bc52"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.829232 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "df9f5211-ab02-49a8-82e6-0c2f4b07bc52" (UID: "df9f5211-ab02-49a8-82e6-0c2f4b07bc52"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.829341 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "df9f5211-ab02-49a8-82e6-0c2f4b07bc52" (UID: "df9f5211-ab02-49a8-82e6-0c2f4b07bc52"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.829457 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "df9f5211-ab02-49a8-82e6-0c2f4b07bc52" (UID: "df9f5211-ab02-49a8-82e6-0c2f4b07bc52"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.918821 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.918940 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.918967 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47sc\" (UniqueName: \"kubernetes.io/projected/d839602b-f183-45c8-af76-72a0d292aa33-kube-api-access-t47sc\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919009 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d839602b-f183-45c8-af76-72a0d292aa33-sasl-config\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919126 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919184 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-sasl-users\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919216 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919323 5118 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919346 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919361 5118 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919374 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919416 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919431 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rhpm6\" (UniqueName: \"kubernetes.io/projected/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-kube-api-access-rhpm6\") on node \"crc\" DevicePath \"\"" Dec 08 17:58:34 crc kubenswrapper[5118]: I1208 17:58:34.919444 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/df9f5211-ab02-49a8-82e6-0c2f4b07bc52-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.020326 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.020396 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t47sc\" (UniqueName: \"kubernetes.io/projected/d839602b-f183-45c8-af76-72a0d292aa33-kube-api-access-t47sc\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.020465 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d839602b-f183-45c8-af76-72a0d292aa33-sasl-config\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: 
\"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.020527 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.020554 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-sasl-users\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.020578 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.020926 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.022089 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d839602b-f183-45c8-af76-72a0d292aa33-sasl-config\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.027865 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.028296 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.029077 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: 
\"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.029991 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.030540 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d839602b-f183-45c8-af76-72a0d292aa33-sasl-users\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.046147 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t47sc\" (UniqueName: \"kubernetes.io/projected/d839602b-f183-45c8-af76-72a0d292aa33-kube-api-access-t47sc\") pod \"default-interconnect-55bf8d5cb-rwr2k\" (UID: \"d839602b-f183-45c8-af76-72a0d292aa33\") " pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.075592 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.389966 5118 generic.go:358] "Generic (PLEG): container finished" podID="df9f5211-ab02-49a8-82e6-0c2f4b07bc52" containerID="23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353" exitCode=0 Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.390128 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" event={"ID":"df9f5211-ab02-49a8-82e6-0c2f4b07bc52","Type":"ContainerDied","Data":"23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.390371 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" event={"ID":"df9f5211-ab02-49a8-82e6-0c2f4b07bc52","Type":"ContainerDied","Data":"b9eccbab184d45ec8b09d562fad481be8c01d5ebbe8dc86e74b7741c2826ecb8"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.390422 5118 scope.go:117] "RemoveContainer" containerID="23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.390175 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-76n5w" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.404070 5118 generic.go:358] "Generic (PLEG): container finished" podID="0e2a1994-199f-4b38-903b-cba9061dfcad" containerID="0e9793d14f55d5b15e4d2dbe096215b1d137841ce8dbbb4a87467089f60758e4" exitCode=0 Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.404119 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerDied","Data":"0e9793d14f55d5b15e4d2dbe096215b1d137841ce8dbbb4a87467089f60758e4"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.404779 5118 scope.go:117] "RemoveContainer" containerID="0e9793d14f55d5b15e4d2dbe096215b1d137841ce8dbbb4a87467089f60758e4" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.411252 5118 generic.go:358] "Generic (PLEG): container finished" podID="f486b0de-c62f-46a2-8649-dca61a92506c" containerID="e87934f74a36739e9fb6b443d1666824f83d307cdd517f3b3112c8838ef6b768" exitCode=0 Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.411329 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerDied","Data":"e87934f74a36739e9fb6b443d1666824f83d307cdd517f3b3112c8838ef6b768"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.411397 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerStarted","Data":"5fbe44206cb5eaed099304c50789cc259f31beb0bc469c4c34ef038e6233d43d"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.411842 5118 scope.go:117] "RemoveContainer" containerID="e87934f74a36739e9fb6b443d1666824f83d307cdd517f3b3112c8838ef6b768" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.416321 5118 generic.go:358] "Generic (PLEG): container finished" podID="ef58ecee-c967-4d4f-946b-8c8123a73084" containerID="b1fb6ba3cae74e0ee86043b6f3cb1e4703e934232948c758d0aff249013c2d96" exitCode=0 Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.416448 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerDied","Data":"b1fb6ba3cae74e0ee86043b6f3cb1e4703e934232948c758d0aff249013c2d96"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.416478 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerStarted","Data":"14f9abfc42b74a6f617ce3b9fb37ab19ecb2e7388bb02af946de77d393bdb9b8"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.417349 5118 scope.go:117] "RemoveContainer" containerID="b1fb6ba3cae74e0ee86043b6f3cb1e4703e934232948c758d0aff249013c2d96" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.454499 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" event={"ID":"35c3d7e4-3ad4-4184-a22e-86654ad7867b","Type":"ContainerStarted","Data":"fbab11af1b00ff69ef149b7b273830c82b832c32137a73fe9dd8db6daa1ed8c3"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.458085 5118 
scope.go:117] "RemoveContainer" containerID="23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353" Dec 08 17:58:35 crc kubenswrapper[5118]: E1208 17:58:35.458558 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353\": container with ID starting with 23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353 not found: ID does not exist" containerID="23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.458611 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353"} err="failed to get container status \"23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353\": rpc error: code = NotFound desc = could not find container \"23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353\": container with ID starting with 23ef45f8f74a4f33cee49aff44fdf03128bfa93ad9ea0b31d1316072eb33d353 not found: ID does not exist" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.459366 5118 generic.go:358] "Generic (PLEG): container finished" podID="8ecda967-3335-4158-839b-9b4048b8f049" containerID="3e66f251fbfe418249b98effecd5995f8457755cc0cca38a78a65ada1af475d1" exitCode=0 Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.459604 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" event={"ID":"8ecda967-3335-4158-839b-9b4048b8f049","Type":"ContainerDied","Data":"3e66f251fbfe418249b98effecd5995f8457755cc0cca38a78a65ada1af475d1"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.459670 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" event={"ID":"8ecda967-3335-4158-839b-9b4048b8f049","Type":"ContainerStarted","Data":"269101b95debae1db230c25bfa8ad7cd5db3eaa90da0c90ea3fda968096073bb"} Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.459786 5118 scope.go:117] "RemoveContainer" containerID="3e66f251fbfe418249b98effecd5995f8457755cc0cca38a78a65ada1af475d1" Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.518566 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-76n5w"] Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.527442 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-76n5w"] Dec 08 17:58:35 crc kubenswrapper[5118]: I1208 17:58:35.552285 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-rwr2k"] Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.470523 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerStarted","Data":"ccb2417346bcabce07b84d961e110a73d232ebc2d4fce2978412cecb91e94e38"} Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.473678 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerStarted","Data":"22081593e01430234cfb5dbf7c618890033249b7e77d517269dcbe6686c62ed4"} Dec 08 17:58:36 
crc kubenswrapper[5118]: I1208 17:58:36.476161 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" event={"ID":"d839602b-f183-45c8-af76-72a0d292aa33","Type":"ContainerStarted","Data":"4ed1d85bf0b758a1fda41070aaab1f2a6869067e761846e0bd5bb9fb92174804"} Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.476316 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" event={"ID":"d839602b-f183-45c8-af76-72a0d292aa33","Type":"ContainerStarted","Data":"ad1af3fce6542ee2b9720079ff3e519c97ea182e4f44abb305379e55fa862060"} Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.481596 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerStarted","Data":"4aa1569377aee5369fcf43349de769f0f5fad3e99ebbedfb73df81ee6b8c54f6"} Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.483806 5118 generic.go:358] "Generic (PLEG): container finished" podID="35c3d7e4-3ad4-4184-a22e-86654ad7867b" containerID="fbab11af1b00ff69ef149b7b273830c82b832c32137a73fe9dd8db6daa1ed8c3" exitCode=0 Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.484027 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" event={"ID":"35c3d7e4-3ad4-4184-a22e-86654ad7867b","Type":"ContainerDied","Data":"fbab11af1b00ff69ef149b7b273830c82b832c32137a73fe9dd8db6daa1ed8c3"} Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.484237 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" event={"ID":"35c3d7e4-3ad4-4184-a22e-86654ad7867b","Type":"ContainerStarted","Data":"0e6cedcd862cd28b09ea5cf8da497c04c433583582df96033357f6d68f4581de"} Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.485081 5118 scope.go:117] "RemoveContainer" containerID="fbab11af1b00ff69ef149b7b273830c82b832c32137a73fe9dd8db6daa1ed8c3" Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.487018 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" event={"ID":"8ecda967-3335-4158-839b-9b4048b8f049","Type":"ContainerStarted","Data":"31feeea8562884312474580d6cb82af6829f58ad5da6bd017e3939a9787a0721"} Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.538843 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" podStartSLOduration=1.355359418 podStartE2EDuration="16.538809055s" podCreationTimestamp="2025-12-08 17:58:20 +0000 UTC" firstStartedPulling="2025-12-08 17:58:20.942931839 +0000 UTC m=+957.844255923" lastFinishedPulling="2025-12-08 17:58:36.126381466 +0000 UTC m=+973.027705560" observedRunningTime="2025-12-08 17:58:36.517652179 +0000 UTC m=+973.418976283" watchObservedRunningTime="2025-12-08 17:58:36.538809055 +0000 UTC m=+973.440133159" Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.552179 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-rwr2k" podStartSLOduration=2.55215286 podStartE2EDuration="2.55215286s" podCreationTimestamp="2025-12-08 17:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-08 17:58:36.544453681 +0000 UTC m=+973.445777785" watchObservedRunningTime="2025-12-08 17:58:36.55215286 +0000 UTC m=+973.453476954" Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.595366 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" podStartSLOduration=4.376012712 podStartE2EDuration="28.595341896s" podCreationTimestamp="2025-12-08 17:58:08 +0000 UTC" firstStartedPulling="2025-12-08 17:58:11.626751697 +0000 UTC m=+948.528075791" lastFinishedPulling="2025-12-08 17:58:35.846080881 +0000 UTC m=+972.747404975" observedRunningTime="2025-12-08 17:58:36.576243402 +0000 UTC m=+973.477567536" watchObservedRunningTime="2025-12-08 17:58:36.595341896 +0000 UTC m=+973.496665990" Dec 08 17:58:36 crc kubenswrapper[5118]: I1208 17:58:36.622188 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" podStartSLOduration=8.257528106 podStartE2EDuration="24.62216181s" podCreationTimestamp="2025-12-08 17:58:12 +0000 UTC" firstStartedPulling="2025-12-08 17:58:19.46299785 +0000 UTC m=+956.364321944" lastFinishedPulling="2025-12-08 17:58:35.827631514 +0000 UTC m=+972.728955648" observedRunningTime="2025-12-08 17:58:36.609703737 +0000 UTC m=+973.511027831" watchObservedRunningTime="2025-12-08 17:58:36.62216181 +0000 UTC m=+973.523485904" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.435741 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9f5211-ab02-49a8-82e6-0c2f4b07bc52" path="/var/lib/kubelet/pods/df9f5211-ab02-49a8-82e6-0c2f4b07bc52/volumes" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.496032 5118 generic.go:358] "Generic (PLEG): container finished" podID="ef58ecee-c967-4d4f-946b-8c8123a73084" containerID="4aa1569377aee5369fcf43349de769f0f5fad3e99ebbedfb73df81ee6b8c54f6" exitCode=0 Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.496144 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerDied","Data":"4aa1569377aee5369fcf43349de769f0f5fad3e99ebbedfb73df81ee6b8c54f6"} Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.496497 5118 scope.go:117] "RemoveContainer" containerID="4aa1569377aee5369fcf43349de769f0f5fad3e99ebbedfb73df81ee6b8c54f6" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.497004 5118 scope.go:117] "RemoveContainer" containerID="b1fb6ba3cae74e0ee86043b6f3cb1e4703e934232948c758d0aff249013c2d96" Dec 08 17:58:37 crc kubenswrapper[5118]: E1208 17:58:37.497200 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v_service-telemetry(ef58ecee-c967-4d4f-946b-8c8123a73084)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" podUID="ef58ecee-c967-4d4f-946b-8c8123a73084" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.499412 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" event={"ID":"35c3d7e4-3ad4-4184-a22e-86654ad7867b","Type":"ContainerStarted","Data":"7c20eecf4a107ff956d250900c9a4c7fdcedf6cb6ad2e39b1a081f57fd9d46ee"} Dec 08 17:58:37 crc 
kubenswrapper[5118]: I1208 17:58:37.507604 5118 generic.go:358] "Generic (PLEG): container finished" podID="8ecda967-3335-4158-839b-9b4048b8f049" containerID="31feeea8562884312474580d6cb82af6829f58ad5da6bd017e3939a9787a0721" exitCode=0 Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.507800 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" event={"ID":"8ecda967-3335-4158-839b-9b4048b8f049","Type":"ContainerDied","Data":"31feeea8562884312474580d6cb82af6829f58ad5da6bd017e3939a9787a0721"} Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.508249 5118 scope.go:117] "RemoveContainer" containerID="31feeea8562884312474580d6cb82af6829f58ad5da6bd017e3939a9787a0721" Dec 08 17:58:37 crc kubenswrapper[5118]: E1208 17:58:37.508472 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-d956b4648-jwkwn_service-telemetry(8ecda967-3335-4158-839b-9b4048b8f049)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" podUID="8ecda967-3335-4158-839b-9b4048b8f049" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.513810 5118 generic.go:358] "Generic (PLEG): container finished" podID="0e2a1994-199f-4b38-903b-cba9061dfcad" containerID="ccb2417346bcabce07b84d961e110a73d232ebc2d4fce2978412cecb91e94e38" exitCode=0 Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.513912 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerDied","Data":"ccb2417346bcabce07b84d961e110a73d232ebc2d4fce2978412cecb91e94e38"} Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.514432 5118 scope.go:117] "RemoveContainer" containerID="ccb2417346bcabce07b84d961e110a73d232ebc2d4fce2978412cecb91e94e38" Dec 08 17:58:37 crc kubenswrapper[5118]: E1208 17:58:37.514687 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-4zrzx_service-telemetry(0e2a1994-199f-4b38-903b-cba9061dfcad)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" podUID="0e2a1994-199f-4b38-903b-cba9061dfcad" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.522710 5118 generic.go:358] "Generic (PLEG): container finished" podID="f486b0de-c62f-46a2-8649-dca61a92506c" containerID="22081593e01430234cfb5dbf7c618890033249b7e77d517269dcbe6686c62ed4" exitCode=0 Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.522857 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerDied","Data":"22081593e01430234cfb5dbf7c618890033249b7e77d517269dcbe6686c62ed4"} Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.523280 5118 scope.go:117] "RemoveContainer" containerID="22081593e01430234cfb5dbf7c618890033249b7e77d517269dcbe6686c62ed4" Dec 08 17:58:37 crc kubenswrapper[5118]: E1208 17:58:37.528129 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp_service-telemetry(f486b0de-c62f-46a2-8649-dca61a92506c)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" podUID="f486b0de-c62f-46a2-8649-dca61a92506c" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.545292 5118 scope.go:117] "RemoveContainer" containerID="3e66f251fbfe418249b98effecd5995f8457755cc0cca38a78a65ada1af475d1" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.571431 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" podStartSLOduration=13.913354025 podStartE2EDuration="16.571417134s" podCreationTimestamp="2025-12-08 17:58:21 +0000 UTC" firstStartedPulling="2025-12-08 17:58:34.302717092 +0000 UTC m=+971.204041176" lastFinishedPulling="2025-12-08 17:58:36.960780151 +0000 UTC m=+973.862104285" observedRunningTime="2025-12-08 17:58:37.569624247 +0000 UTC m=+974.470948331" watchObservedRunningTime="2025-12-08 17:58:37.571417134 +0000 UTC m=+974.472741228" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.596185 5118 scope.go:117] "RemoveContainer" containerID="0e9793d14f55d5b15e4d2dbe096215b1d137841ce8dbbb4a87467089f60758e4" Dec 08 17:58:37 crc kubenswrapper[5118]: I1208 17:58:37.652048 5118 scope.go:117] "RemoveContainer" containerID="e87934f74a36739e9fb6b443d1666824f83d307cdd517f3b3112c8838ef6b768" Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.535380 5118 scope.go:117] "RemoveContainer" containerID="ccb2417346bcabce07b84d961e110a73d232ebc2d4fce2978412cecb91e94e38" Dec 08 17:58:38 crc kubenswrapper[5118]: E1208 17:58:38.535708 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-4zrzx_service-telemetry(0e2a1994-199f-4b38-903b-cba9061dfcad)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" podUID="0e2a1994-199f-4b38-903b-cba9061dfcad" Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.539337 5118 scope.go:117] "RemoveContainer" containerID="22081593e01430234cfb5dbf7c618890033249b7e77d517269dcbe6686c62ed4" Dec 08 17:58:38 crc kubenswrapper[5118]: E1208 17:58:38.539614 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp_service-telemetry(f486b0de-c62f-46a2-8649-dca61a92506c)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" podUID="f486b0de-c62f-46a2-8649-dca61a92506c" Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.542809 5118 scope.go:117] "RemoveContainer" containerID="4aa1569377aee5369fcf43349de769f0f5fad3e99ebbedfb73df81ee6b8c54f6" Dec 08 17:58:38 crc kubenswrapper[5118]: E1208 17:58:38.542990 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v_service-telemetry(ef58ecee-c967-4d4f-946b-8c8123a73084)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" podUID="ef58ecee-c967-4d4f-946b-8c8123a73084" Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.545704 5118 generic.go:358] "Generic (PLEG): container 
finished" podID="35c3d7e4-3ad4-4184-a22e-86654ad7867b" containerID="7c20eecf4a107ff956d250900c9a4c7fdcedf6cb6ad2e39b1a081f57fd9d46ee" exitCode=0 Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.545837 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" event={"ID":"35c3d7e4-3ad4-4184-a22e-86654ad7867b","Type":"ContainerDied","Data":"7c20eecf4a107ff956d250900c9a4c7fdcedf6cb6ad2e39b1a081f57fd9d46ee"} Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.545931 5118 scope.go:117] "RemoveContainer" containerID="fbab11af1b00ff69ef149b7b273830c82b832c32137a73fe9dd8db6daa1ed8c3" Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.546264 5118 scope.go:117] "RemoveContainer" containerID="7c20eecf4a107ff956d250900c9a4c7fdcedf6cb6ad2e39b1a081f57fd9d46ee" Dec 08 17:58:38 crc kubenswrapper[5118]: E1208 17:58:38.546530 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk_service-telemetry(35c3d7e4-3ad4-4184-a22e-86654ad7867b)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" podUID="35c3d7e4-3ad4-4184-a22e-86654ad7867b" Dec 08 17:58:38 crc kubenswrapper[5118]: I1208 17:58:38.548291 5118 scope.go:117] "RemoveContainer" containerID="31feeea8562884312474580d6cb82af6829f58ad5da6bd017e3939a9787a0721" Dec 08 17:58:38 crc kubenswrapper[5118]: E1208 17:58:38.548440 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-d956b4648-jwkwn_service-telemetry(8ecda967-3335-4158-839b-9b4048b8f049)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" podUID="8ecda967-3335-4158-839b-9b4048b8f049" Dec 08 17:58:39 crc kubenswrapper[5118]: I1208 17:58:39.557747 5118 scope.go:117] "RemoveContainer" containerID="7c20eecf4a107ff956d250900c9a4c7fdcedf6cb6ad2e39b1a081f57fd9d46ee" Dec 08 17:58:39 crc kubenswrapper[5118]: E1208 17:58:39.558384 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk_service-telemetry(35c3d7e4-3ad4-4184-a22e-86654ad7867b)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" podUID="35c3d7e4-3ad4-4184-a22e-86654ad7867b" Dec 08 17:58:49 crc kubenswrapper[5118]: I1208 17:58:49.427683 5118 scope.go:117] "RemoveContainer" containerID="ccb2417346bcabce07b84d961e110a73d232ebc2d4fce2978412cecb91e94e38" Dec 08 17:58:49 crc kubenswrapper[5118]: I1208 17:58:49.428757 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:58:50 crc kubenswrapper[5118]: I1208 17:58:50.427771 5118 scope.go:117] "RemoveContainer" containerID="4aa1569377aee5369fcf43349de769f0f5fad3e99ebbedfb73df81ee6b8c54f6" Dec 08 17:58:50 crc kubenswrapper[5118]: I1208 17:58:50.633271 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-4zrzx" event={"ID":"0e2a1994-199f-4b38-903b-cba9061dfcad","Type":"ContainerStarted","Data":"34f95e7ed1db53f5b91cb626986d526a33e7174895dadbad5b4413da20e01b34"} Dec 08 17:58:51 crc 
kubenswrapper[5118]: I1208 17:58:51.641957 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-kf59v" event={"ID":"ef58ecee-c967-4d4f-946b-8c8123a73084","Type":"ContainerStarted","Data":"93e1626c71822f2aa5375e8f2926ab72e61f3eea1f82c2156c274c2af81adc13"} Dec 08 17:58:52 crc kubenswrapper[5118]: I1208 17:58:52.427261 5118 scope.go:117] "RemoveContainer" containerID="31feeea8562884312474580d6cb82af6829f58ad5da6bd017e3939a9787a0721" Dec 08 17:58:53 crc kubenswrapper[5118]: I1208 17:58:53.434210 5118 scope.go:117] "RemoveContainer" containerID="22081593e01430234cfb5dbf7c618890033249b7e77d517269dcbe6686c62ed4" Dec 08 17:58:53 crc kubenswrapper[5118]: I1208 17:58:53.655229 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d956b4648-jwkwn" event={"ID":"8ecda967-3335-4158-839b-9b4048b8f049","Type":"ContainerStarted","Data":"fc718c9ed77354160036f95cf499ca1e883fd2d0775d91649095085d642e4d8d"} Dec 08 17:58:54 crc kubenswrapper[5118]: I1208 17:58:54.427988 5118 scope.go:117] "RemoveContainer" containerID="7c20eecf4a107ff956d250900c9a4c7fdcedf6cb6ad2e39b1a081f57fd9d46ee" Dec 08 17:58:54 crc kubenswrapper[5118]: I1208 17:58:54.670805 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-gh2mp" event={"ID":"f486b0de-c62f-46a2-8649-dca61a92506c","Type":"ContainerStarted","Data":"7aecb59e0b625816d2d03f468469d78fc98b90b9fb9fcc62c5181c80da4dd465"} Dec 08 17:58:55 crc kubenswrapper[5118]: I1208 17:58:55.681913 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-65cf5f4bb8-q2xqk" event={"ID":"35c3d7e4-3ad4-4184-a22e-86654ad7867b","Type":"ContainerStarted","Data":"02776249f57ab371a5a6aab418d5768f7763e3f5df82b007a5de74a7b3f43508"} Dec 08 17:59:05 crc kubenswrapper[5118]: I1208 17:59:05.516965 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 08 17:59:05 crc kubenswrapper[5118]: I1208 17:59:05.999142 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 08 17:59:05 crc kubenswrapper[5118]: I1208 17:59:05.999366 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.002231 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.002571 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.117847 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vr9f\" (UniqueName: \"kubernetes.io/projected/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-kube-api-access-6vr9f\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.118277 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.118380 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-qdr-test-config\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.219925 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.220129 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-qdr-test-config\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.220180 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vr9f\" (UniqueName: \"kubernetes.io/projected/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-kube-api-access-6vr9f\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.220962 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-qdr-test-config\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.227320 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 
17:59:06.248163 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vr9f\" (UniqueName: \"kubernetes.io/projected/73a290f7-fdfb-4484-9e5f-e3f80b72dec3-kube-api-access-6vr9f\") pod \"qdr-test\" (UID: \"73a290f7-fdfb-4484-9e5f-e3f80b72dec3\") " pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.321335 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 08 17:59:06 crc kubenswrapper[5118]: I1208 17:59:06.841671 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 08 17:59:06 crc kubenswrapper[5118]: W1208 17:59:06.846561 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73a290f7_fdfb_4484_9e5f_e3f80b72dec3.slice/crio-00e4072bf714a82fa7a0613060eb288b2efa56b76d893e0bc34338bdce1c1591 WatchSource:0}: Error finding container 00e4072bf714a82fa7a0613060eb288b2efa56b76d893e0bc34338bdce1c1591: Status 404 returned error can't find the container with id 00e4072bf714a82fa7a0613060eb288b2efa56b76d893e0bc34338bdce1c1591 Dec 08 17:59:07 crc kubenswrapper[5118]: I1208 17:59:07.767412 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"73a290f7-fdfb-4484-9e5f-e3f80b72dec3","Type":"ContainerStarted","Data":"00e4072bf714a82fa7a0613060eb288b2efa56b76d893e0bc34338bdce1c1591"} Dec 08 17:59:10 crc kubenswrapper[5118]: E1208 17:59:10.343230 5118 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.445148 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.453489 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.467191 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50680: no serving certificate available for the kubelet" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.491305 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50694: no serving certificate available for the kubelet" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.539555 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50704: no serving certificate available for the kubelet" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.582399 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50714: no serving certificate available for the kubelet" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.643395 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50718: no serving certificate available for the kubelet" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.748120 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50722: no serving certificate available for the kubelet" Dec 08 17:59:12 crc kubenswrapper[5118]: I1208 17:59:12.943653 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50736: no serving certificate available for the kubelet" Dec 08 17:59:13 crc kubenswrapper[5118]: I1208 17:59:13.293457 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50752: no serving certificate available for the kubelet" Dec 
08 17:59:13 crc kubenswrapper[5118]: I1208 17:59:13.811630 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"73a290f7-fdfb-4484-9e5f-e3f80b72dec3","Type":"ContainerStarted","Data":"4ebfce0c843489025b02d48ca209aeac66c8e0702d0a2aa2fa4cac00352330d3"} Dec 08 17:59:13 crc kubenswrapper[5118]: I1208 17:59:13.846973 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.820259695 podStartE2EDuration="8.846949842s" podCreationTimestamp="2025-12-08 17:59:05 +0000 UTC" firstStartedPulling="2025-12-08 17:59:06.849772497 +0000 UTC m=+1003.751096621" lastFinishedPulling="2025-12-08 17:59:12.876462674 +0000 UTC m=+1009.777786768" observedRunningTime="2025-12-08 17:59:13.839851244 +0000 UTC m=+1010.741175348" watchObservedRunningTime="2025-12-08 17:59:13.846949842 +0000 UTC m=+1010.748273946" Dec 08 17:59:13 crc kubenswrapper[5118]: I1208 17:59:13.956643 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50762: no serving certificate available for the kubelet" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.164380 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-pbhxq"] Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.174578 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.178441 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.179034 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.179068 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.179515 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.179719 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.189042 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.190998 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-pbhxq"] Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.340671 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-publisher\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.340826 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: 
\"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-sensubility-config\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.341058 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.341203 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-healthcheck-log\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.341333 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xng2\" (UniqueName: \"kubernetes.io/projected/612790c4-c2da-4318-89f8-c7745da26ece-kube-api-access-9xng2\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.341436 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.341594 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-config\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.443473 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.443598 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-healthcheck-log\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.443687 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xng2\" (UniqueName: \"kubernetes.io/projected/612790c4-c2da-4318-89f8-c7745da26ece-kube-api-access-9xng2\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " 
pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.443761 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.443963 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-config\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.444025 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-publisher\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.446337 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-publisher\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.444264 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-sensubility-config\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.446788 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-config\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.446873 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.447985 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-healthcheck-log\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.448725 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-sensubility-config\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: 
\"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.449276 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.472474 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xng2\" (UniqueName: \"kubernetes.io/projected/612790c4-c2da-4318-89f8-c7745da26ece-kube-api-access-9xng2\") pod \"stf-smoketest-smoke1-pbhxq\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.503464 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.592253 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.604128 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.606510 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.752995 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rv8s\" (UniqueName: \"kubernetes.io/projected/f1d063fa-3d6b-49c3-aa66-288dd70351b0-kube-api-access-5rv8s\") pod \"curl\" (UID: \"f1d063fa-3d6b-49c3-aa66-288dd70351b0\") " pod="service-telemetry/curl" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.766490 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-pbhxq"] Dec 08 17:59:14 crc kubenswrapper[5118]: W1208 17:59:14.774952 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod612790c4_c2da_4318_89f8_c7745da26ece.slice/crio-d780467c91a7b84e66a1c6b108d5d66e85dec21a5e28ea3e2bcd71f25b565354 WatchSource:0}: Error finding container d780467c91a7b84e66a1c6b108d5d66e85dec21a5e28ea3e2bcd71f25b565354: Status 404 returned error can't find the container with id d780467c91a7b84e66a1c6b108d5d66e85dec21a5e28ea3e2bcd71f25b565354 Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.819276 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" event={"ID":"612790c4-c2da-4318-89f8-c7745da26ece","Type":"ContainerStarted","Data":"d780467c91a7b84e66a1c6b108d5d66e85dec21a5e28ea3e2bcd71f25b565354"} Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.854948 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rv8s\" (UniqueName: \"kubernetes.io/projected/f1d063fa-3d6b-49c3-aa66-288dd70351b0-kube-api-access-5rv8s\") pod \"curl\" (UID: \"f1d063fa-3d6b-49c3-aa66-288dd70351b0\") " pod="service-telemetry/curl" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.875248 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rv8s\" (UniqueName: 
\"kubernetes.io/projected/f1d063fa-3d6b-49c3-aa66-288dd70351b0-kube-api-access-5rv8s\") pod \"curl\" (UID: \"f1d063fa-3d6b-49c3-aa66-288dd70351b0\") " pod="service-telemetry/curl" Dec 08 17:59:14 crc kubenswrapper[5118]: I1208 17:59:14.937980 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 17:59:15 crc kubenswrapper[5118]: I1208 17:59:15.266973 5118 ???:1] "http: TLS handshake error from 192.168.126.11:50778: no serving certificate available for the kubelet" Dec 08 17:59:15 crc kubenswrapper[5118]: I1208 17:59:15.288024 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 08 17:59:15 crc kubenswrapper[5118]: W1208 17:59:15.295843 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d063fa_3d6b_49c3_aa66_288dd70351b0.slice/crio-b472bcb7f848f750e3d98d3c87b7fe31a8c888618d7990843018d1b071acd1c2 WatchSource:0}: Error finding container b472bcb7f848f750e3d98d3c87b7fe31a8c888618d7990843018d1b071acd1c2: Status 404 returned error can't find the container with id b472bcb7f848f750e3d98d3c87b7fe31a8c888618d7990843018d1b071acd1c2 Dec 08 17:59:15 crc kubenswrapper[5118]: I1208 17:59:15.838385 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f1d063fa-3d6b-49c3-aa66-288dd70351b0","Type":"ContainerStarted","Data":"b472bcb7f848f750e3d98d3c87b7fe31a8c888618d7990843018d1b071acd1c2"} Dec 08 17:59:17 crc kubenswrapper[5118]: I1208 17:59:17.845465 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37252: no serving certificate available for the kubelet" Dec 08 17:59:17 crc kubenswrapper[5118]: I1208 17:59:17.853939 5118 generic.go:358] "Generic (PLEG): container finished" podID="f1d063fa-3d6b-49c3-aa66-288dd70351b0" containerID="a7928a57f2a8dbff9fceeb51195188b3a2cd0237b3fbfeaf0fd5213020d1106a" exitCode=0 Dec 08 17:59:17 crc kubenswrapper[5118]: I1208 17:59:17.854123 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f1d063fa-3d6b-49c3-aa66-288dd70351b0","Type":"ContainerDied","Data":"a7928a57f2a8dbff9fceeb51195188b3a2cd0237b3fbfeaf0fd5213020d1106a"} Dec 08 17:59:22 crc kubenswrapper[5118]: I1208 17:59:22.998121 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37266: no serving certificate available for the kubelet" Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.286775 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.410147 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rv8s\" (UniqueName: \"kubernetes.io/projected/f1d063fa-3d6b-49c3-aa66-288dd70351b0-kube-api-access-5rv8s\") pod \"f1d063fa-3d6b-49c3-aa66-288dd70351b0\" (UID: \"f1d063fa-3d6b-49c3-aa66-288dd70351b0\") " Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.414443 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d063fa-3d6b-49c3-aa66-288dd70351b0-kube-api-access-5rv8s" (OuterVolumeSpecName: "kube-api-access-5rv8s") pod "f1d063fa-3d6b-49c3-aa66-288dd70351b0" (UID: "f1d063fa-3d6b-49c3-aa66-288dd70351b0"). InnerVolumeSpecName "kube-api-access-5rv8s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.479781 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37270: no serving certificate available for the kubelet" Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.511998 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rv8s\" (UniqueName: \"kubernetes.io/projected/f1d063fa-3d6b-49c3-aa66-288dd70351b0-kube-api-access-5rv8s\") on node \"crc\" DevicePath \"\"" Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.770579 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37282: no serving certificate available for the kubelet" Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.905762 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" event={"ID":"612790c4-c2da-4318-89f8-c7745da26ece","Type":"ContainerStarted","Data":"49a2d94e35dff7bd89d4adda86a9bb2a1c75c043a28e9dd9185f028d19dcc6a8"} Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.907514 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f1d063fa-3d6b-49c3-aa66-288dd70351b0","Type":"ContainerDied","Data":"b472bcb7f848f750e3d98d3c87b7fe31a8c888618d7990843018d1b071acd1c2"} Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.907546 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b472bcb7f848f750e3d98d3c87b7fe31a8c888618d7990843018d1b071acd1c2" Dec 08 17:59:24 crc kubenswrapper[5118]: I1208 17:59:24.907512 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 17:59:30 crc kubenswrapper[5118]: I1208 17:59:30.948541 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" event={"ID":"612790c4-c2da-4318-89f8-c7745da26ece","Type":"ContainerStarted","Data":"ae6ee93a5a6d6a767e3bbb450044418d804b86e821540a775c6b84a6df04014f"} Dec 08 17:59:33 crc kubenswrapper[5118]: I1208 17:59:33.262254 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37990: no serving certificate available for the kubelet" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.687845 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" podStartSLOduration=22.600332899 podStartE2EDuration="37.687823138s" podCreationTimestamp="2025-12-08 17:59:14 +0000 UTC" firstStartedPulling="2025-12-08 17:59:14.777839036 +0000 UTC m=+1011.679163130" lastFinishedPulling="2025-12-08 17:59:29.865329275 +0000 UTC m=+1026.766653369" observedRunningTime="2025-12-08 17:59:30.988005254 +0000 UTC m=+1027.889329358" watchObservedRunningTime="2025-12-08 17:59:51.687823138 +0000 UTC m=+1048.589147232" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.691254 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jlbqc"] Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.692194 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1d063fa-3d6b-49c3-aa66-288dd70351b0" containerName="curl" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.692220 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d063fa-3d6b-49c3-aa66-288dd70351b0" containerName="curl" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.692392 5118 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="f1d063fa-3d6b-49c3-aa66-288dd70351b0" containerName="curl" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.704866 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.710554 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlbqc"] Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.841484 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnx7l\" (UniqueName: \"kubernetes.io/projected/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-kube-api-access-rnx7l\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.841644 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-utilities\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.841679 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-catalog-content\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.943487 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-utilities\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.943537 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-catalog-content\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.943641 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rnx7l\" (UniqueName: \"kubernetes.io/projected/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-kube-api-access-rnx7l\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.944596 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-utilities\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.944611 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-catalog-content\") pod \"community-operators-jlbqc\" (UID: 
\"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:51 crc kubenswrapper[5118]: I1208 17:59:51.965475 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnx7l\" (UniqueName: \"kubernetes.io/projected/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-kube-api-access-rnx7l\") pod \"community-operators-jlbqc\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:52 crc kubenswrapper[5118]: I1208 17:59:52.028286 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 17:59:52 crc kubenswrapper[5118]: I1208 17:59:52.282742 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlbqc"] Dec 08 17:59:52 crc kubenswrapper[5118]: W1208 17:59:52.289567 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcfd2a5_06dd_4fc6_ad8f_8979503b1a97.slice/crio-f65e8336c15dfa0952bc851e6729d6f5b18e9601bde2f081d6c11bed66bb7f75 WatchSource:0}: Error finding container f65e8336c15dfa0952bc851e6729d6f5b18e9601bde2f081d6c11bed66bb7f75: Status 404 returned error can't find the container with id f65e8336c15dfa0952bc851e6729d6f5b18e9601bde2f081d6c11bed66bb7f75 Dec 08 17:59:53 crc kubenswrapper[5118]: I1208 17:59:53.139121 5118 generic.go:358] "Generic (PLEG): container finished" podID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerID="29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0" exitCode=0 Dec 08 17:59:53 crc kubenswrapper[5118]: I1208 17:59:53.139218 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlbqc" event={"ID":"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97","Type":"ContainerDied","Data":"29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0"} Dec 08 17:59:53 crc kubenswrapper[5118]: I1208 17:59:53.139423 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlbqc" event={"ID":"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97","Type":"ContainerStarted","Data":"f65e8336c15dfa0952bc851e6729d6f5b18e9601bde2f081d6c11bed66bb7f75"} Dec 08 17:59:53 crc kubenswrapper[5118]: I1208 17:59:53.771499 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37036: no serving certificate available for the kubelet" Dec 08 17:59:54 crc kubenswrapper[5118]: I1208 17:59:54.147462 5118 generic.go:358] "Generic (PLEG): container finished" podID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerID="bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84" exitCode=0 Dec 08 17:59:54 crc kubenswrapper[5118]: I1208 17:59:54.147524 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlbqc" event={"ID":"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97","Type":"ContainerDied","Data":"bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84"} Dec 08 17:59:54 crc kubenswrapper[5118]: I1208 17:59:54.922590 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37052: no serving certificate available for the kubelet" Dec 08 17:59:55 crc kubenswrapper[5118]: I1208 17:59:55.158478 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlbqc" 
event={"ID":"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97","Type":"ContainerStarted","Data":"50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8"} Dec 08 17:59:55 crc kubenswrapper[5118]: I1208 17:59:55.175095 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jlbqc" podStartSLOduration=3.616106387 podStartE2EDuration="4.175078646s" podCreationTimestamp="2025-12-08 17:59:51 +0000 UTC" firstStartedPulling="2025-12-08 17:59:53.141291492 +0000 UTC m=+1050.042615586" lastFinishedPulling="2025-12-08 17:59:53.700263741 +0000 UTC m=+1050.601587845" observedRunningTime="2025-12-08 17:59:55.17413474 +0000 UTC m=+1052.075458834" watchObservedRunningTime="2025-12-08 17:59:55.175078646 +0000 UTC m=+1052.076402740" Dec 08 17:59:59 crc kubenswrapper[5118]: I1208 17:59:59.190237 5118 generic.go:358] "Generic (PLEG): container finished" podID="612790c4-c2da-4318-89f8-c7745da26ece" containerID="49a2d94e35dff7bd89d4adda86a9bb2a1c75c043a28e9dd9185f028d19dcc6a8" exitCode=0 Dec 08 17:59:59 crc kubenswrapper[5118]: I1208 17:59:59.190358 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" event={"ID":"612790c4-c2da-4318-89f8-c7745da26ece","Type":"ContainerDied","Data":"49a2d94e35dff7bd89d4adda86a9bb2a1c75c043a28e9dd9185f028d19dcc6a8"} Dec 08 17:59:59 crc kubenswrapper[5118]: I1208 17:59:59.191353 5118 scope.go:117] "RemoveContainer" containerID="49a2d94e35dff7bd89d4adda86a9bb2a1c75c043a28e9dd9185f028d19dcc6a8" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.196300 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb"] Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.209230 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.210106 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb"] Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.212383 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.214230 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.303748 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vr6c\" (UniqueName: \"kubernetes.io/projected/730f299b-bb80-45b1-a8bc-a10ce2e3567b-kube-api-access-5vr6c\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.303995 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/730f299b-bb80-45b1-a8bc-a10ce2e3567b-secret-volume\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.304139 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/730f299b-bb80-45b1-a8bc-a10ce2e3567b-config-volume\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.405911 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5vr6c\" (UniqueName: \"kubernetes.io/projected/730f299b-bb80-45b1-a8bc-a10ce2e3567b-kube-api-access-5vr6c\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.405996 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/730f299b-bb80-45b1-a8bc-a10ce2e3567b-secret-volume\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.406035 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/730f299b-bb80-45b1-a8bc-a10ce2e3567b-config-volume\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.406947 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/730f299b-bb80-45b1-a8bc-a10ce2e3567b-config-volume\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.414395 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/730f299b-bb80-45b1-a8bc-a10ce2e3567b-secret-volume\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.436301 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vr6c\" (UniqueName: \"kubernetes.io/projected/730f299b-bb80-45b1-a8bc-a10ce2e3567b-kube-api-access-5vr6c\") pod \"collect-profiles-29420280-hxvtb\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.524692 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:00 crc kubenswrapper[5118]: I1208 18:00:00.845035 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb"] Dec 08 18:00:01 crc kubenswrapper[5118]: I1208 18:00:01.209399 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" event={"ID":"730f299b-bb80-45b1-a8bc-a10ce2e3567b","Type":"ContainerStarted","Data":"b1a39f3165381c248d7aceee620976eb32cc03cf894e0c079d6cda93dd4f3f8a"} Dec 08 18:00:01 crc kubenswrapper[5118]: I1208 18:00:01.209437 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" event={"ID":"730f299b-bb80-45b1-a8bc-a10ce2e3567b","Type":"ContainerStarted","Data":"b5bd753bbb1e3b7a1a3a45a2e95e5ba50be111ef86f9fb933a18cfd783496e8d"} Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.030382 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.030466 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.073006 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.221970 5118 generic.go:358] "Generic (PLEG): container finished" podID="730f299b-bb80-45b1-a8bc-a10ce2e3567b" containerID="b1a39f3165381c248d7aceee620976eb32cc03cf894e0c079d6cda93dd4f3f8a" exitCode=0 Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.222053 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" event={"ID":"730f299b-bb80-45b1-a8bc-a10ce2e3567b","Type":"ContainerDied","Data":"b1a39f3165381c248d7aceee620976eb32cc03cf894e0c079d6cda93dd4f3f8a"} Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.224607 5118 generic.go:358] "Generic (PLEG): container finished" podID="612790c4-c2da-4318-89f8-c7745da26ece" 
containerID="ae6ee93a5a6d6a767e3bbb450044418d804b86e821540a775c6b84a6df04014f" exitCode=0 Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.224646 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" event={"ID":"612790c4-c2da-4318-89f8-c7745da26ece","Type":"ContainerDied","Data":"ae6ee93a5a6d6a767e3bbb450044418d804b86e821540a775c6b84a6df04014f"} Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.274241 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.391722 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlbqc"] Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.555031 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.641129 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vr6c\" (UniqueName: \"kubernetes.io/projected/730f299b-bb80-45b1-a8bc-a10ce2e3567b-kube-api-access-5vr6c\") pod \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.641198 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/730f299b-bb80-45b1-a8bc-a10ce2e3567b-secret-volume\") pod \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.641371 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/730f299b-bb80-45b1-a8bc-a10ce2e3567b-config-volume\") pod \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\" (UID: \"730f299b-bb80-45b1-a8bc-a10ce2e3567b\") " Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.642237 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/730f299b-bb80-45b1-a8bc-a10ce2e3567b-config-volume" (OuterVolumeSpecName: "config-volume") pod "730f299b-bb80-45b1-a8bc-a10ce2e3567b" (UID: "730f299b-bb80-45b1-a8bc-a10ce2e3567b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.646643 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f299b-bb80-45b1-a8bc-a10ce2e3567b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "730f299b-bb80-45b1-a8bc-a10ce2e3567b" (UID: "730f299b-bb80-45b1-a8bc-a10ce2e3567b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.648090 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730f299b-bb80-45b1-a8bc-a10ce2e3567b-kube-api-access-5vr6c" (OuterVolumeSpecName: "kube-api-access-5vr6c") pod "730f299b-bb80-45b1-a8bc-a10ce2e3567b" (UID: "730f299b-bb80-45b1-a8bc-a10ce2e3567b"). InnerVolumeSpecName "kube-api-access-5vr6c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.743222 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/730f299b-bb80-45b1-a8bc-a10ce2e3567b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.743261 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vr6c\" (UniqueName: \"kubernetes.io/projected/730f299b-bb80-45b1-a8bc-a10ce2e3567b-kube-api-access-5vr6c\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:02 crc kubenswrapper[5118]: I1208 18:00:02.743274 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/730f299b-bb80-45b1-a8bc-a10ce2e3567b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.238765 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.242074 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-hxvtb" event={"ID":"730f299b-bb80-45b1-a8bc-a10ce2e3567b","Type":"ContainerDied","Data":"b5bd753bbb1e3b7a1a3a45a2e95e5ba50be111ef86f9fb933a18cfd783496e8d"} Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.242218 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5bd753bbb1e3b7a1a3a45a2e95e5ba50be111ef86f9fb933a18cfd783496e8d" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.521742 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.653729 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-entrypoint-script\") pod \"612790c4-c2da-4318-89f8-c7745da26ece\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.654058 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-config\") pod \"612790c4-c2da-4318-89f8-c7745da26ece\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.654136 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xng2\" (UniqueName: \"kubernetes.io/projected/612790c4-c2da-4318-89f8-c7745da26ece-kube-api-access-9xng2\") pod \"612790c4-c2da-4318-89f8-c7745da26ece\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.654159 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-healthcheck-log\") pod \"612790c4-c2da-4318-89f8-c7745da26ece\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.654249 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-entrypoint-script\") pod \"612790c4-c2da-4318-89f8-c7745da26ece\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.654290 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-sensubility-config\") pod \"612790c4-c2da-4318-89f8-c7745da26ece\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.654359 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-publisher\") pod \"612790c4-c2da-4318-89f8-c7745da26ece\" (UID: \"612790c4-c2da-4318-89f8-c7745da26ece\") " Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.667961 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612790c4-c2da-4318-89f8-c7745da26ece-kube-api-access-9xng2" (OuterVolumeSpecName: "kube-api-access-9xng2") pod "612790c4-c2da-4318-89f8-c7745da26ece" (UID: "612790c4-c2da-4318-89f8-c7745da26ece"). InnerVolumeSpecName "kube-api-access-9xng2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.669194 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "612790c4-c2da-4318-89f8-c7745da26ece" (UID: "612790c4-c2da-4318-89f8-c7745da26ece"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.671940 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "612790c4-c2da-4318-89f8-c7745da26ece" (UID: "612790c4-c2da-4318-89f8-c7745da26ece"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.674346 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "612790c4-c2da-4318-89f8-c7745da26ece" (UID: "612790c4-c2da-4318-89f8-c7745da26ece"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.680017 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "612790c4-c2da-4318-89f8-c7745da26ece" (UID: "612790c4-c2da-4318-89f8-c7745da26ece"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.683007 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "612790c4-c2da-4318-89f8-c7745da26ece" (UID: "612790c4-c2da-4318-89f8-c7745da26ece"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.685040 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "612790c4-c2da-4318-89f8-c7745da26ece" (UID: "612790c4-c2da-4318-89f8-c7745da26ece"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.756731 5118 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.756769 5118 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.756782 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xng2\" (UniqueName: \"kubernetes.io/projected/612790c4-c2da-4318-89f8-c7745da26ece-kube-api-access-9xng2\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.756794 5118 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.756807 5118 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.756816 5118 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5118]: I1208 18:00:03.756826 5118 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/612790c4-c2da-4318-89f8-c7745da26ece-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.247588 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.247588 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pbhxq" event={"ID":"612790c4-c2da-4318-89f8-c7745da26ece","Type":"ContainerDied","Data":"d780467c91a7b84e66a1c6b108d5d66e85dec21a5e28ea3e2bcd71f25b565354"} Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.247759 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d780467c91a7b84e66a1c6b108d5d66e85dec21a5e28ea3e2bcd71f25b565354" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.248345 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jlbqc" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="registry-server" containerID="cri-o://50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8" gracePeriod=2 Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.663311 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.740156 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-utilities\") pod \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.740409 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-catalog-content\") pod \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.740431 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnx7l\" (UniqueName: \"kubernetes.io/projected/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-kube-api-access-rnx7l\") pod \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\" (UID: \"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97\") " Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.741197 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-utilities" (OuterVolumeSpecName: "utilities") pod "5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" (UID: "5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.748674 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-kube-api-access-rnx7l" (OuterVolumeSpecName: "kube-api-access-rnx7l") pod "5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" (UID: "5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97"). InnerVolumeSpecName "kube-api-access-rnx7l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.791947 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" (UID: "5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.842221 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.842271 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:04 crc kubenswrapper[5118]: I1208 18:00:04.842284 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rnx7l\" (UniqueName: \"kubernetes.io/projected/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97-kube-api-access-rnx7l\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.261231 5118 generic.go:358] "Generic (PLEG): container finished" podID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerID="50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8" exitCode=0 Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.261299 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlbqc" event={"ID":"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97","Type":"ContainerDied","Data":"50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8"} Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.261374 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlbqc" event={"ID":"5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97","Type":"ContainerDied","Data":"f65e8336c15dfa0952bc851e6729d6f5b18e9601bde2f081d6c11bed66bb7f75"} Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.261393 5118 scope.go:117] "RemoveContainer" containerID="50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.261421 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jlbqc" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.294030 5118 scope.go:117] "RemoveContainer" containerID="bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.317902 5118 scope.go:117] "RemoveContainer" containerID="29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.332368 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlbqc"] Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.346100 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jlbqc"] Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.367035 5118 scope.go:117] "RemoveContainer" containerID="50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8" Dec 08 18:00:05 crc kubenswrapper[5118]: E1208 18:00:05.367615 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8\": container with ID starting with 50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8 not found: ID does not exist" containerID="50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.367701 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8"} err="failed to get container status \"50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8\": rpc error: code = NotFound desc = could not find container \"50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8\": container with ID starting with 50f6a51e07f70b4a49764ac139767c8bdc4ebd5264f6a7c160dcecd9c1e7b4e8 not found: ID does not exist" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.367734 5118 scope.go:117] "RemoveContainer" containerID="bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84" Dec 08 18:00:05 crc kubenswrapper[5118]: E1208 18:00:05.368208 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84\": container with ID starting with bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84 not found: ID does not exist" containerID="bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.368265 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84"} err="failed to get container status \"bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84\": rpc error: code = NotFound desc = could not find container \"bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84\": container with ID starting with bd3b31cb90ce672afe81260e5b03c02a8f5558be78dfea0ce9b8bec936061e84 not found: ID does not exist" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.368301 5118 scope.go:117] "RemoveContainer" containerID="29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0" Dec 08 18:00:05 crc kubenswrapper[5118]: E1208 18:00:05.368811 5118 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0\": container with ID starting with 29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0 not found: ID does not exist" containerID="29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.368844 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0"} err="failed to get container status \"29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0\": rpc error: code = NotFound desc = could not find container \"29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0\": container with ID starting with 29e59715b0ccab033dd5da542625fc9b1d4fba44114e5f710464480e8f2541f0 not found: ID does not exist" Dec 08 18:00:05 crc kubenswrapper[5118]: I1208 18:00:05.440135 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" path="/var/lib/kubelet/pods/5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97/volumes" Dec 08 18:00:25 crc kubenswrapper[5118]: I1208 18:00:25.066692 5118 ???:1] "http: TLS handshake error from 192.168.126.11:47780: no serving certificate available for the kubelet" Dec 08 18:00:31 crc kubenswrapper[5118]: I1208 18:00:31.962998 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:00:31 crc kubenswrapper[5118]: I1208 18:00:31.963751 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:00:34 crc kubenswrapper[5118]: I1208 18:00:34.758461 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33524: no serving certificate available for the kubelet" Dec 08 18:00:55 crc kubenswrapper[5118]: I1208 18:00:55.252213 5118 ???:1] "http: TLS handshake error from 192.168.126.11:59996: no serving certificate available for the kubelet" Dec 08 18:01:01 crc kubenswrapper[5118]: I1208 18:01:01.962185 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:01:01 crc kubenswrapper[5118]: I1208 18:01:01.962802 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.240538 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-b88kp"] Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241650 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="612790c4-c2da-4318-89f8-c7745da26ece" containerName="smoketest-ceilometer" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241664 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="612790c4-c2da-4318-89f8-c7745da26ece" containerName="smoketest-ceilometer" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241672 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="registry-server" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241677 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="registry-server" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241688 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="612790c4-c2da-4318-89f8-c7745da26ece" containerName="smoketest-collectd" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241693 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="612790c4-c2da-4318-89f8-c7745da26ece" containerName="smoketest-collectd" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241711 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="730f299b-bb80-45b1-a8bc-a10ce2e3567b" containerName="collect-profiles" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241718 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="730f299b-bb80-45b1-a8bc-a10ce2e3567b" containerName="collect-profiles" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241737 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="extract-content" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241742 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="extract-content" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241763 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="extract-utilities" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241768 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="extract-utilities" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241905 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="612790c4-c2da-4318-89f8-c7745da26ece" containerName="smoketest-ceilometer" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241919 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="612790c4-c2da-4318-89f8-c7745da26ece" containerName="smoketest-collectd" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241929 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="5dcfd2a5-06dd-4fc6-ad8f-8979503b1a97" containerName="registry-server" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.241942 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="730f299b-bb80-45b1-a8bc-a10ce2e3567b" containerName="collect-profiles" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.266045 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.267441 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-b88kp"] Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.396761 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qd82\" (UniqueName: \"kubernetes.io/projected/b3a22077-e946-43fe-a687-8eb0a8454203-kube-api-access-4qd82\") pod \"infrawatch-operators-b88kp\" (UID: \"b3a22077-e946-43fe-a687-8eb0a8454203\") " pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.499173 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4qd82\" (UniqueName: \"kubernetes.io/projected/b3a22077-e946-43fe-a687-8eb0a8454203-kube-api-access-4qd82\") pod \"infrawatch-operators-b88kp\" (UID: \"b3a22077-e946-43fe-a687-8eb0a8454203\") " pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.528865 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qd82\" (UniqueName: \"kubernetes.io/projected/b3a22077-e946-43fe-a687-8eb0a8454203-kube-api-access-4qd82\") pod \"infrawatch-operators-b88kp\" (UID: \"b3a22077-e946-43fe-a687-8eb0a8454203\") " pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.588462 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.856468 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-b88kp"] Dec 08 18:01:18 crc kubenswrapper[5118]: I1208 18:01:18.867208 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-b88kp" event={"ID":"b3a22077-e946-43fe-a687-8eb0a8454203","Type":"ContainerStarted","Data":"92d99c5a4fa68f7524856ce7dcfd40196bc172967fc2489cf66b15e6dd7572b5"} Dec 08 18:01:19 crc kubenswrapper[5118]: I1208 18:01:19.875245 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-b88kp" event={"ID":"b3a22077-e946-43fe-a687-8eb0a8454203","Type":"ContainerStarted","Data":"5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36"} Dec 08 18:01:19 crc kubenswrapper[5118]: I1208 18:01:19.893587 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-b88kp" podStartSLOduration=1.315129298 podStartE2EDuration="1.893567439s" podCreationTimestamp="2025-12-08 18:01:18 +0000 UTC" firstStartedPulling="2025-12-08 18:01:18.856518831 +0000 UTC m=+1135.757842925" lastFinishedPulling="2025-12-08 18:01:19.434956972 +0000 UTC m=+1136.336281066" observedRunningTime="2025-12-08 18:01:19.888437552 +0000 UTC m=+1136.789761666" watchObservedRunningTime="2025-12-08 18:01:19.893567439 +0000 UTC m=+1136.794891533" Dec 08 18:01:25 crc kubenswrapper[5118]: I1208 18:01:25.446793 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56670: no serving certificate available for the kubelet" Dec 08 18:01:28 crc kubenswrapper[5118]: I1208 18:01:28.588918 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:28 crc kubenswrapper[5118]: I1208 18:01:28.589286 5118 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:28 crc kubenswrapper[5118]: I1208 18:01:28.621742 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:28 crc kubenswrapper[5118]: I1208 18:01:28.976845 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:29 crc kubenswrapper[5118]: I1208 18:01:29.024303 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-b88kp"] Dec 08 18:01:30 crc kubenswrapper[5118]: I1208 18:01:30.963533 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-b88kp" podUID="b3a22077-e946-43fe-a687-8eb0a8454203" containerName="registry-server" containerID="cri-o://5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36" gracePeriod=2 Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.510849 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.596051 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qd82\" (UniqueName: \"kubernetes.io/projected/b3a22077-e946-43fe-a687-8eb0a8454203-kube-api-access-4qd82\") pod \"b3a22077-e946-43fe-a687-8eb0a8454203\" (UID: \"b3a22077-e946-43fe-a687-8eb0a8454203\") " Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.604912 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a22077-e946-43fe-a687-8eb0a8454203-kube-api-access-4qd82" (OuterVolumeSpecName: "kube-api-access-4qd82") pod "b3a22077-e946-43fe-a687-8eb0a8454203" (UID: "b3a22077-e946-43fe-a687-8eb0a8454203"). InnerVolumeSpecName "kube-api-access-4qd82". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.697813 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qd82\" (UniqueName: \"kubernetes.io/projected/b3a22077-e946-43fe-a687-8eb0a8454203-kube-api-access-4qd82\") on node \"crc\" DevicePath \"\"" Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.963333 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.963424 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.963491 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.964487 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22c5cc7c9c4c3cf08ced7571f97d2349fe0ba35b6b6efc4a95ae6a4960c893da"} pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.964594 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" containerID="cri-o://22c5cc7c9c4c3cf08ced7571f97d2349fe0ba35b6b6efc4a95ae6a4960c893da" gracePeriod=600 Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.978475 5118 generic.go:358] "Generic (PLEG): container finished" podID="b3a22077-e946-43fe-a687-8eb0a8454203" containerID="5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36" exitCode=0 Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.978568 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-b88kp" Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.978588 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-b88kp" event={"ID":"b3a22077-e946-43fe-a687-8eb0a8454203","Type":"ContainerDied","Data":"5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36"} Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.978667 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-b88kp" event={"ID":"b3a22077-e946-43fe-a687-8eb0a8454203","Type":"ContainerDied","Data":"92d99c5a4fa68f7524856ce7dcfd40196bc172967fc2489cf66b15e6dd7572b5"} Dec 08 18:01:31 crc kubenswrapper[5118]: I1208 18:01:31.978688 5118 scope.go:117] "RemoveContainer" containerID="5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36" Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.003458 5118 scope.go:117] "RemoveContainer" containerID="5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36" Dec 08 18:01:32 crc kubenswrapper[5118]: E1208 18:01:32.004464 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36\": container with ID starting with 5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36 not found: ID does not exist" containerID="5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36" Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.004510 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36"} err="failed to get container status \"5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36\": rpc error: code = NotFound desc = could not find container \"5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36\": container with ID starting with 5d296a6967a85999dde9cfbad24d7acdd531cbb08abea62f7d40922c90d72a36 not found: ID does not exist" Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.028096 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-b88kp"] Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.033651 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-b88kp"] Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.992034 5118 generic.go:358] "Generic (PLEG): container finished" podID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerID="22c5cc7c9c4c3cf08ced7571f97d2349fe0ba35b6b6efc4a95ae6a4960c893da" exitCode=0 Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.992147 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerDied","Data":"22c5cc7c9c4c3cf08ced7571f97d2349fe0ba35b6b6efc4a95ae6a4960c893da"} Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.992729 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"19427898a1b36b27d54897d19b17f5f4ac1fb5469ef84f99251a664345051ae2"} Dec 08 18:01:32 crc kubenswrapper[5118]: I1208 18:01:32.992777 5118 scope.go:117] "RemoveContainer" 
containerID="ba4ed75eee971f8ac62b8cc0f3802e18dd9cabd36e3862daae0b5ce56bd2f691" Dec 08 18:01:33 crc kubenswrapper[5118]: I1208 18:01:33.445273 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3a22077-e946-43fe-a687-8eb0a8454203" path="/var/lib/kubelet/pods/b3a22077-e946-43fe-a687-8eb0a8454203/volumes" Dec 08 18:01:56 crc kubenswrapper[5118]: I1208 18:01:56.715001 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55808: no serving certificate available for the kubelet" Dec 08 18:01:57 crc kubenswrapper[5118]: I1208 18:01:57.034736 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55820: no serving certificate available for the kubelet" Dec 08 18:01:57 crc kubenswrapper[5118]: I1208 18:01:57.302352 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55828: no serving certificate available for the kubelet" Dec 08 18:01:57 crc kubenswrapper[5118]: I1208 18:01:57.575863 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55832: no serving certificate available for the kubelet" Dec 08 18:01:57 crc kubenswrapper[5118]: I1208 18:01:57.865733 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55848: no serving certificate available for the kubelet" Dec 08 18:01:58 crc kubenswrapper[5118]: I1208 18:01:58.169192 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55858: no serving certificate available for the kubelet" Dec 08 18:01:58 crc kubenswrapper[5118]: I1208 18:01:58.443298 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55870: no serving certificate available for the kubelet" Dec 08 18:01:58 crc kubenswrapper[5118]: I1208 18:01:58.707104 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55872: no serving certificate available for the kubelet" Dec 08 18:01:58 crc kubenswrapper[5118]: I1208 18:01:58.956398 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55882: no serving certificate available for the kubelet" Dec 08 18:01:59 crc kubenswrapper[5118]: I1208 18:01:59.218284 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55890: no serving certificate available for the kubelet" Dec 08 18:01:59 crc kubenswrapper[5118]: I1208 18:01:59.520643 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55898: no serving certificate available for the kubelet" Dec 08 18:01:59 crc kubenswrapper[5118]: I1208 18:01:59.775965 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55900: no serving certificate available for the kubelet" Dec 08 18:02:00 crc kubenswrapper[5118]: I1208 18:02:00.099342 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55904: no serving certificate available for the kubelet" Dec 08 18:02:00 crc kubenswrapper[5118]: I1208 18:02:00.381154 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55916: no serving certificate available for the kubelet" Dec 08 18:02:00 crc kubenswrapper[5118]: I1208 18:02:00.619331 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55926: no serving certificate available for the kubelet" Dec 08 18:02:00 crc kubenswrapper[5118]: I1208 18:02:00.925373 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55928: no serving certificate available for the kubelet" Dec 08 18:02:01 crc kubenswrapper[5118]: I1208 18:02:01.238592 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55942: no serving certificate available for the kubelet" Dec 08 18:02:01 crc kubenswrapper[5118]: I1208 18:02:01.525387 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55948: no serving certificate available for the kubelet" Dec 
08 18:02:01 crc kubenswrapper[5118]: I1208 18:02:01.802212 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55964: no serving certificate available for the kubelet" Dec 08 18:02:14 crc kubenswrapper[5118]: I1208 18:02:14.286649 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42162: no serving certificate available for the kubelet" Dec 08 18:02:14 crc kubenswrapper[5118]: I1208 18:02:14.540532 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42166: no serving certificate available for the kubelet" Dec 08 18:02:14 crc kubenswrapper[5118]: I1208 18:02:14.804140 5118 ???:1] "http: TLS handshake error from 192.168.126.11:42180: no serving certificate available for the kubelet" Dec 08 18:02:23 crc kubenswrapper[5118]: I1208 18:02:23.843997 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dlvbf_a091751f-234c-43ee-8324-ebb98bb3ec36/kube-multus/0.log" Dec 08 18:02:23 crc kubenswrapper[5118]: I1208 18:02:23.844789 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dlvbf_a091751f-234c-43ee-8324-ebb98bb3ec36/kube-multus/0.log" Dec 08 18:02:23 crc kubenswrapper[5118]: I1208 18:02:23.852256 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 18:02:23 crc kubenswrapper[5118]: I1208 18:02:23.852342 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.547384 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gctth/must-gather-5cz8j"] Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.548931 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3a22077-e946-43fe-a687-8eb0a8454203" containerName="registry-server" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.548951 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a22077-e946-43fe-a687-8eb0a8454203" containerName="registry-server" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.549079 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3a22077-e946-43fe-a687-8eb0a8454203" containerName="registry-server" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.560175 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.562720 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-gctth\"/\"default-dockercfg-ddssk\"" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.562835 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gctth\"/\"kube-root-ca.crt\"" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.563075 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gctth\"/\"openshift-service-ca.crt\"" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.565223 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gctth/must-gather-5cz8j"] Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.664265 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/736c26bc-8908-4abc-89f5-7f1d201b7e1a-must-gather-output\") pod \"must-gather-5cz8j\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.664387 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9qc9\" (UniqueName: \"kubernetes.io/projected/736c26bc-8908-4abc-89f5-7f1d201b7e1a-kube-api-access-z9qc9\") pod \"must-gather-5cz8j\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.765296 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/736c26bc-8908-4abc-89f5-7f1d201b7e1a-must-gather-output\") pod \"must-gather-5cz8j\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.765411 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9qc9\" (UniqueName: \"kubernetes.io/projected/736c26bc-8908-4abc-89f5-7f1d201b7e1a-kube-api-access-z9qc9\") pod \"must-gather-5cz8j\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.766128 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/736c26bc-8908-4abc-89f5-7f1d201b7e1a-must-gather-output\") pod \"must-gather-5cz8j\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.792666 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9qc9\" (UniqueName: \"kubernetes.io/projected/736c26bc-8908-4abc-89f5-7f1d201b7e1a-kube-api-access-z9qc9\") pod \"must-gather-5cz8j\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:40 crc kubenswrapper[5118]: I1208 18:02:40.889178 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:02:41 crc kubenswrapper[5118]: I1208 18:02:41.312763 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gctth/must-gather-5cz8j"] Dec 08 18:02:41 crc kubenswrapper[5118]: I1208 18:02:41.575794 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gctth/must-gather-5cz8j" event={"ID":"736c26bc-8908-4abc-89f5-7f1d201b7e1a","Type":"ContainerStarted","Data":"8c2078259dc7954fb8b48e4ad3a36c55973c83a30a621384b0abb7895ef2c2f5"} Dec 08 18:02:46 crc kubenswrapper[5118]: I1208 18:02:46.616308 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gctth/must-gather-5cz8j" event={"ID":"736c26bc-8908-4abc-89f5-7f1d201b7e1a","Type":"ContainerStarted","Data":"9d3704ce8d527f3fa04596959aa3b7160c3b9f5f3211911377a6920f1de11d8b"} Dec 08 18:02:47 crc kubenswrapper[5118]: I1208 18:02:47.626666 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gctth/must-gather-5cz8j" event={"ID":"736c26bc-8908-4abc-89f5-7f1d201b7e1a","Type":"ContainerStarted","Data":"ae16ad5c65263932310921ce0cb3ae2fab8bc690678ac807d442c726789b40d1"} Dec 08 18:02:47 crc kubenswrapper[5118]: I1208 18:02:47.657508 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gctth/must-gather-5cz8j" podStartSLOduration=2.664902597 podStartE2EDuration="7.657486058s" podCreationTimestamp="2025-12-08 18:02:40 +0000 UTC" firstStartedPulling="2025-12-08 18:02:41.324117248 +0000 UTC m=+1218.225441342" lastFinishedPulling="2025-12-08 18:02:46.316700709 +0000 UTC m=+1223.218024803" observedRunningTime="2025-12-08 18:02:47.650191104 +0000 UTC m=+1224.551515218" watchObservedRunningTime="2025-12-08 18:02:47.657486058 +0000 UTC m=+1224.558810162" Dec 08 18:02:50 crc kubenswrapper[5118]: I1208 18:02:50.545134 5118 ???:1] "http: TLS handshake error from 192.168.126.11:54634: no serving certificate available for the kubelet" Dec 08 18:03:25 crc kubenswrapper[5118]: I1208 18:03:25.355017 5118 ???:1] "http: TLS handshake error from 192.168.126.11:37204: no serving certificate available for the kubelet" Dec 08 18:03:25 crc kubenswrapper[5118]: I1208 18:03:25.516797 5118 ???:1] "http: TLS handshake error from 192.168.126.11:57466: no serving certificate available for the kubelet" Dec 08 18:03:25 crc kubenswrapper[5118]: I1208 18:03:25.538683 5118 ???:1] "http: TLS handshake error from 192.168.126.11:57472: no serving certificate available for the kubelet" Dec 08 18:03:32 crc kubenswrapper[5118]: I1208 18:03:32.392938 5118 ???:1] "http: TLS handshake error from 192.168.126.11:57482: no serving certificate available for the kubelet" Dec 08 18:03:37 crc kubenswrapper[5118]: I1208 18:03:37.468038 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40374: no serving certificate available for the kubelet" Dec 08 18:03:37 crc kubenswrapper[5118]: I1208 18:03:37.639209 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40376: no serving certificate available for the kubelet" Dec 08 18:03:37 crc kubenswrapper[5118]: I1208 18:03:37.665376 5118 ???:1] "http: TLS handshake error from 192.168.126.11:40392: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.055397 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44746: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.197079 5118 ???:1] "http: TLS handshake 
error from 192.168.126.11:44750: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.219661 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44764: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.234352 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44766: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.381389 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44782: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.396435 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44786: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.415273 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44802: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.556334 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44806: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.668661 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44822: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.713375 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44826: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.728051 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44842: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.879616 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44848: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.913173 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44864: no serving certificate available for the kubelet" Dec 08 18:03:52 crc kubenswrapper[5118]: I1208 18:03:52.936084 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44880: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.058079 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44888: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.190161 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44904: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.193979 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44918: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.251580 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44932: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.410431 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44934: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.412666 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44942: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.453438 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44958: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.592871 5118 
???:1] "http: TLS handshake error from 192.168.126.11:44972: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.731202 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44984: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.757017 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44994: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.772272 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44998: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.893940 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45012: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.894799 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45020: no serving certificate available for the kubelet" Dec 08 18:03:53 crc kubenswrapper[5118]: I1208 18:03:53.927367 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45026: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.067316 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45032: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.250392 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45034: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.251758 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45050: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.289077 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45054: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.459928 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45058: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.477102 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45074: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.500445 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45078: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.611078 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45092: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.761207 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45108: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.762765 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45122: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.776088 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45138: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.933144 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45142: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.959007 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45146: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: 
I1208 18:03:54.961506 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45150: no serving certificate available for the kubelet" Dec 08 18:03:54 crc kubenswrapper[5118]: I1208 18:03:54.962390 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45166: no serving certificate available for the kubelet" Dec 08 18:03:55 crc kubenswrapper[5118]: I1208 18:03:55.120150 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45174: no serving certificate available for the kubelet" Dec 08 18:03:55 crc kubenswrapper[5118]: I1208 18:03:55.276022 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45186: no serving certificate available for the kubelet" Dec 08 18:03:55 crc kubenswrapper[5118]: I1208 18:03:55.280812 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45188: no serving certificate available for the kubelet" Dec 08 18:03:55 crc kubenswrapper[5118]: I1208 18:03:55.283994 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45202: no serving certificate available for the kubelet" Dec 08 18:03:55 crc kubenswrapper[5118]: I1208 18:03:55.415303 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45212: no serving certificate available for the kubelet" Dec 08 18:03:55 crc kubenswrapper[5118]: I1208 18:03:55.439913 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45218: no serving certificate available for the kubelet" Dec 08 18:03:55 crc kubenswrapper[5118]: I1208 18:03:55.468165 5118 ???:1] "http: TLS handshake error from 192.168.126.11:45234: no serving certificate available for the kubelet" Dec 08 18:04:01 crc kubenswrapper[5118]: I1208 18:04:01.961945 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:04:01 crc kubenswrapper[5118]: I1208 18:04:01.962408 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:04:07 crc kubenswrapper[5118]: I1208 18:04:07.131036 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55084: no serving certificate available for the kubelet" Dec 08 18:04:07 crc kubenswrapper[5118]: I1208 18:04:07.268683 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55090: no serving certificate available for the kubelet" Dec 08 18:04:07 crc kubenswrapper[5118]: I1208 18:04:07.303912 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55098: no serving certificate available for the kubelet" Dec 08 18:04:07 crc kubenswrapper[5118]: I1208 18:04:07.439805 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55110: no serving certificate available for the kubelet" Dec 08 18:04:07 crc kubenswrapper[5118]: I1208 18:04:07.476401 5118 ???:1] "http: TLS handshake error from 192.168.126.11:55118: no serving certificate available for the kubelet" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.737491 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p8pz8"] Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.745384 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.756757 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8pz8"] Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.854507 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-catalog-content\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.854587 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-utilities\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.854710 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6d9d\" (UniqueName: \"kubernetes.io/projected/a2de420a-ccef-431d-8597-193d09e4fa4f-kube-api-access-l6d9d\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.956372 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6d9d\" (UniqueName: \"kubernetes.io/projected/a2de420a-ccef-431d-8597-193d09e4fa4f-kube-api-access-l6d9d\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.956501 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-catalog-content\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.956542 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-utilities\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.957091 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-utilities\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.957223 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-catalog-content\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.962152 5118 patch_prober.go:28] 
interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.962227 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:04:31 crc kubenswrapper[5118]: I1208 18:04:31.995377 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6d9d\" (UniqueName: \"kubernetes.io/projected/a2de420a-ccef-431d-8597-193d09e4fa4f-kube-api-access-l6d9d\") pod \"certified-operators-p8pz8\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:32 crc kubenswrapper[5118]: I1208 18:04:32.081629 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:32 crc kubenswrapper[5118]: I1208 18:04:32.556219 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8pz8"] Dec 08 18:04:32 crc kubenswrapper[5118]: I1208 18:04:32.567596 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 18:04:33 crc kubenswrapper[5118]: I1208 18:04:33.475495 5118 generic.go:358] "Generic (PLEG): container finished" podID="a2de420a-ccef-431d-8597-193d09e4fa4f" containerID="89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0" exitCode=0 Dec 08 18:04:33 crc kubenswrapper[5118]: I1208 18:04:33.475585 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8pz8" event={"ID":"a2de420a-ccef-431d-8597-193d09e4fa4f","Type":"ContainerDied","Data":"89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0"} Dec 08 18:04:33 crc kubenswrapper[5118]: I1208 18:04:33.475626 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8pz8" event={"ID":"a2de420a-ccef-431d-8597-193d09e4fa4f","Type":"ContainerStarted","Data":"656c976a128194bbccaf372ab604816f935ff61f3b5270bd6d2cdd60be6c9c8a"} Dec 08 18:04:34 crc kubenswrapper[5118]: I1208 18:04:34.485475 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8pz8" event={"ID":"a2de420a-ccef-431d-8597-193d09e4fa4f","Type":"ContainerStarted","Data":"4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9"} Dec 08 18:04:35 crc kubenswrapper[5118]: I1208 18:04:35.497952 5118 generic.go:358] "Generic (PLEG): container finished" podID="a2de420a-ccef-431d-8597-193d09e4fa4f" containerID="4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9" exitCode=0 Dec 08 18:04:35 crc kubenswrapper[5118]: I1208 18:04:35.498110 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8pz8" event={"ID":"a2de420a-ccef-431d-8597-193d09e4fa4f","Type":"ContainerDied","Data":"4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9"} Dec 08 18:04:36 crc kubenswrapper[5118]: I1208 18:04:36.508699 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-p8pz8" event={"ID":"a2de420a-ccef-431d-8597-193d09e4fa4f","Type":"ContainerStarted","Data":"ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927"} Dec 08 18:04:40 crc kubenswrapper[5118]: I1208 18:04:40.584379 5118 ???:1] "http: TLS handshake error from 192.168.126.11:33458: no serving certificate available for the kubelet" Dec 08 18:04:42 crc kubenswrapper[5118]: I1208 18:04:42.081797 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:42 crc kubenswrapper[5118]: I1208 18:04:42.083389 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:42 crc kubenswrapper[5118]: I1208 18:04:42.144044 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:42 crc kubenswrapper[5118]: I1208 18:04:42.168973 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p8pz8" podStartSLOduration=10.412639422 podStartE2EDuration="11.168953909s" podCreationTimestamp="2025-12-08 18:04:31 +0000 UTC" firstStartedPulling="2025-12-08 18:04:33.476553835 +0000 UTC m=+1330.377877929" lastFinishedPulling="2025-12-08 18:04:34.232868322 +0000 UTC m=+1331.134192416" observedRunningTime="2025-12-08 18:04:36.537652886 +0000 UTC m=+1333.438977020" watchObservedRunningTime="2025-12-08 18:04:42.168953909 +0000 UTC m=+1339.070278013" Dec 08 18:04:42 crc kubenswrapper[5118]: I1208 18:04:42.457957 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5gtms"] Dec 08 18:04:42 crc kubenswrapper[5118]: I1208 18:04:42.964487 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5gtms"] Dec 08 18:04:42 crc kubenswrapper[5118]: I1208 18:04:42.964981 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.030203 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.033422 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwz69\" (UniqueName: \"kubernetes.io/projected/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-kube-api-access-bwz69\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.033594 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-utilities\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.033635 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-catalog-content\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.135050 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-utilities\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.135115 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-catalog-content\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.135278 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwz69\" (UniqueName: \"kubernetes.io/projected/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-kube-api-access-bwz69\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.135866 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-utilities\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.135999 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-catalog-content\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.165484 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-bwz69\" (UniqueName: \"kubernetes.io/projected/a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5-kube-api-access-bwz69\") pod \"redhat-operators-5gtms\" (UID: \"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5\") " pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.311177 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.544902 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5gtms"] Dec 08 18:04:43 crc kubenswrapper[5118]: I1208 18:04:43.566986 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gtms" event={"ID":"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5","Type":"ContainerStarted","Data":"b2ae2dd42072206957faae07a63052ff3899d34cdfbee5aa2b6610adac5d0916"} Dec 08 18:04:44 crc kubenswrapper[5118]: I1208 18:04:44.577303 5118 generic.go:358] "Generic (PLEG): container finished" podID="a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5" containerID="346aa86f1a5ecc597aecaf8c0a5faee11bcdd8f0c1a059132cec9ee800ba048d" exitCode=0 Dec 08 18:04:44 crc kubenswrapper[5118]: I1208 18:04:44.577376 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gtms" event={"ID":"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5","Type":"ContainerDied","Data":"346aa86f1a5ecc597aecaf8c0a5faee11bcdd8f0c1a059132cec9ee800ba048d"} Dec 08 18:04:44 crc kubenswrapper[5118]: I1208 18:04:44.581080 5118 generic.go:358] "Generic (PLEG): container finished" podID="736c26bc-8908-4abc-89f5-7f1d201b7e1a" containerID="9d3704ce8d527f3fa04596959aa3b7160c3b9f5f3211911377a6920f1de11d8b" exitCode=0 Dec 08 18:04:44 crc kubenswrapper[5118]: I1208 18:04:44.581188 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gctth/must-gather-5cz8j" event={"ID":"736c26bc-8908-4abc-89f5-7f1d201b7e1a","Type":"ContainerDied","Data":"9d3704ce8d527f3fa04596959aa3b7160c3b9f5f3211911377a6920f1de11d8b"} Dec 08 18:04:44 crc kubenswrapper[5118]: I1208 18:04:44.581834 5118 scope.go:117] "RemoveContainer" containerID="9d3704ce8d527f3fa04596959aa3b7160c3b9f5f3211911377a6920f1de11d8b" Dec 08 18:04:45 crc kubenswrapper[5118]: I1208 18:04:45.253648 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8pz8"] Dec 08 18:04:45 crc kubenswrapper[5118]: I1208 18:04:45.588623 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p8pz8" podUID="a2de420a-ccef-431d-8597-193d09e4fa4f" containerName="registry-server" containerID="cri-o://ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927" gracePeriod=2 Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.027462 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.080514 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-catalog-content\") pod \"a2de420a-ccef-431d-8597-193d09e4fa4f\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.080616 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6d9d\" (UniqueName: \"kubernetes.io/projected/a2de420a-ccef-431d-8597-193d09e4fa4f-kube-api-access-l6d9d\") pod \"a2de420a-ccef-431d-8597-193d09e4fa4f\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.080657 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-utilities\") pod \"a2de420a-ccef-431d-8597-193d09e4fa4f\" (UID: \"a2de420a-ccef-431d-8597-193d09e4fa4f\") " Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.081950 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-utilities" (OuterVolumeSpecName: "utilities") pod "a2de420a-ccef-431d-8597-193d09e4fa4f" (UID: "a2de420a-ccef-431d-8597-193d09e4fa4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.087836 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2de420a-ccef-431d-8597-193d09e4fa4f-kube-api-access-l6d9d" (OuterVolumeSpecName: "kube-api-access-l6d9d") pod "a2de420a-ccef-431d-8597-193d09e4fa4f" (UID: "a2de420a-ccef-431d-8597-193d09e4fa4f"). InnerVolumeSpecName "kube-api-access-l6d9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.121219 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2de420a-ccef-431d-8597-193d09e4fa4f" (UID: "a2de420a-ccef-431d-8597-193d09e4fa4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.182186 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.182224 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l6d9d\" (UniqueName: \"kubernetes.io/projected/a2de420a-ccef-431d-8597-193d09e4fa4f-kube-api-access-l6d9d\") on node \"crc\" DevicePath \"\"" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.182238 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2de420a-ccef-431d-8597-193d09e4fa4f-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.603255 5118 generic.go:358] "Generic (PLEG): container finished" podID="a2de420a-ccef-431d-8597-193d09e4fa4f" containerID="ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927" exitCode=0 Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.603345 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8pz8" event={"ID":"a2de420a-ccef-431d-8597-193d09e4fa4f","Type":"ContainerDied","Data":"ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927"} Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.603928 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8pz8" event={"ID":"a2de420a-ccef-431d-8597-193d09e4fa4f","Type":"ContainerDied","Data":"656c976a128194bbccaf372ab604816f935ff61f3b5270bd6d2cdd60be6c9c8a"} Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.603960 5118 scope.go:117] "RemoveContainer" containerID="ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.603505 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8pz8" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.647294 5118 scope.go:117] "RemoveContainer" containerID="4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.650180 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8pz8"] Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.654299 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p8pz8"] Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.683646 5118 scope.go:117] "RemoveContainer" containerID="89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.703251 5118 scope.go:117] "RemoveContainer" containerID="ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927" Dec 08 18:04:46 crc kubenswrapper[5118]: E1208 18:04:46.703810 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927\": container with ID starting with ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927 not found: ID does not exist" containerID="ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.703847 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927"} err="failed to get container status \"ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927\": rpc error: code = NotFound desc = could not find container \"ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927\": container with ID starting with ef62b410e58ac6eb3ff275005f7c13407567bea8556661bfe34f48b62a3e4927 not found: ID does not exist" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.703893 5118 scope.go:117] "RemoveContainer" containerID="4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9" Dec 08 18:04:46 crc kubenswrapper[5118]: E1208 18:04:46.704231 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9\": container with ID starting with 4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9 not found: ID does not exist" containerID="4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.704258 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9"} err="failed to get container status \"4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9\": rpc error: code = NotFound desc = could not find container \"4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9\": container with ID starting with 4b5a2a6fe933e02b8004616be5e0911bfed6ed98b461b7655c6784099b6966a9 not found: ID does not exist" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.704275 5118 scope.go:117] "RemoveContainer" containerID="89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0" Dec 08 18:04:46 crc kubenswrapper[5118]: E1208 18:04:46.704505 5118 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0\": container with ID starting with 89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0 not found: ID does not exist" containerID="89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0" Dec 08 18:04:46 crc kubenswrapper[5118]: I1208 18:04:46.704531 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0"} err="failed to get container status \"89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0\": rpc error: code = NotFound desc = could not find container \"89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0\": container with ID starting with 89f2ec736d6701c19f08aabe29f7aa3d8a35e5ea76415839c12f86be5277a5a0 not found: ID does not exist" Dec 08 18:04:47 crc kubenswrapper[5118]: I1208 18:04:47.436598 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2de420a-ccef-431d-8597-193d09e4fa4f" path="/var/lib/kubelet/pods/a2de420a-ccef-431d-8597-193d09e4fa4f/volumes" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.559279 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60752: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.716297 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60766: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.729578 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60776: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.755446 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60792: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.764247 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60800: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.776749 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60808: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.789050 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60820: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.806517 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60834: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.820177 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60840: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.965324 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60854: no serving certificate available for the kubelet" Dec 08 18:04:50 crc kubenswrapper[5118]: I1208 18:04:50.978285 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60868: no serving certificate available for the kubelet" Dec 08 18:04:51 crc kubenswrapper[5118]: I1208 18:04:51.001564 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60878: no serving certificate available for the kubelet" Dec 08 18:04:51 crc kubenswrapper[5118]: I1208 18:04:51.012168 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60884: no serving certificate available for the kubelet" Dec 08 18:04:51 crc 
kubenswrapper[5118]: I1208 18:04:51.026897 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60900: no serving certificate available for the kubelet" Dec 08 18:04:51 crc kubenswrapper[5118]: I1208 18:04:51.040484 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60908: no serving certificate available for the kubelet" Dec 08 18:04:51 crc kubenswrapper[5118]: I1208 18:04:51.058154 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60920: no serving certificate available for the kubelet" Dec 08 18:04:51 crc kubenswrapper[5118]: I1208 18:04:51.071319 5118 ???:1] "http: TLS handshake error from 192.168.126.11:60928: no serving certificate available for the kubelet" Dec 08 18:04:51 crc kubenswrapper[5118]: I1208 18:04:51.642277 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gtms" event={"ID":"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5","Type":"ContainerStarted","Data":"49e870ea2df47c64cdb0c3dcad43af8899dbec3b29fbec745d6efc987a081643"} Dec 08 18:04:52 crc kubenswrapper[5118]: I1208 18:04:52.651152 5118 generic.go:358] "Generic (PLEG): container finished" podID="a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5" containerID="49e870ea2df47c64cdb0c3dcad43af8899dbec3b29fbec745d6efc987a081643" exitCode=0 Dec 08 18:04:52 crc kubenswrapper[5118]: I1208 18:04:52.651439 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gtms" event={"ID":"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5","Type":"ContainerDied","Data":"49e870ea2df47c64cdb0c3dcad43af8899dbec3b29fbec745d6efc987a081643"} Dec 08 18:04:53 crc kubenswrapper[5118]: I1208 18:04:53.661438 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gtms" event={"ID":"a92e71c9-7ef3-42ef-a103-b8cb38fd3ee5","Type":"ContainerStarted","Data":"b1e6b84e1a659b8120e1300979bf01d24bbf2f33d629d7a9094beb0ab8b5075e"} Dec 08 18:04:53 crc kubenswrapper[5118]: I1208 18:04:53.692318 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5gtms" podStartSLOduration=5.278211947 podStartE2EDuration="11.69229449s" podCreationTimestamp="2025-12-08 18:04:42 +0000 UTC" firstStartedPulling="2025-12-08 18:04:44.578775938 +0000 UTC m=+1341.480100032" lastFinishedPulling="2025-12-08 18:04:50.992858451 +0000 UTC m=+1347.894182575" observedRunningTime="2025-12-08 18:04:53.688109329 +0000 UTC m=+1350.589433423" watchObservedRunningTime="2025-12-08 18:04:53.69229449 +0000 UTC m=+1350.593618624" Dec 08 18:04:56 crc kubenswrapper[5118]: I1208 18:04:56.121332 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gctth/must-gather-5cz8j"] Dec 08 18:04:56 crc kubenswrapper[5118]: I1208 18:04:56.122303 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-gctth/must-gather-5cz8j" podUID="736c26bc-8908-4abc-89f5-7f1d201b7e1a" containerName="copy" containerID="cri-o://ae16ad5c65263932310921ce0cb3ae2fab8bc690678ac807d442c726789b40d1" gracePeriod=2 Dec 08 18:04:56 crc kubenswrapper[5118]: I1208 18:04:56.124309 5118 status_manager.go:895] "Failed to get status for pod" podUID="736c26bc-8908-4abc-89f5-7f1d201b7e1a" pod="openshift-must-gather-gctth/must-gather-5cz8j" err="pods \"must-gather-5cz8j\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gctth\": no relationship found between node 'crc' and this object" Dec 08 18:04:56 crc 
kubenswrapper[5118]: I1208 18:04:56.131778 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gctth/must-gather-5cz8j"] Dec 08 18:04:57 crc kubenswrapper[5118]: I1208 18:04:57.689501 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gctth_must-gather-5cz8j_736c26bc-8908-4abc-89f5-7f1d201b7e1a/copy/0.log" Dec 08 18:04:57 crc kubenswrapper[5118]: I1208 18:04:57.690655 5118 generic.go:358] "Generic (PLEG): container finished" podID="736c26bc-8908-4abc-89f5-7f1d201b7e1a" containerID="ae16ad5c65263932310921ce0cb3ae2fab8bc690678ac807d442c726789b40d1" exitCode=143 Dec 08 18:04:57 crc kubenswrapper[5118]: I1208 18:04:57.875637 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gctth_must-gather-5cz8j_736c26bc-8908-4abc-89f5-7f1d201b7e1a/copy/0.log" Dec 08 18:04:57 crc kubenswrapper[5118]: I1208 18:04:57.876516 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:04:57 crc kubenswrapper[5118]: I1208 18:04:57.970292 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9qc9\" (UniqueName: \"kubernetes.io/projected/736c26bc-8908-4abc-89f5-7f1d201b7e1a-kube-api-access-z9qc9\") pod \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " Dec 08 18:04:57 crc kubenswrapper[5118]: I1208 18:04:57.970345 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/736c26bc-8908-4abc-89f5-7f1d201b7e1a-must-gather-output\") pod \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\" (UID: \"736c26bc-8908-4abc-89f5-7f1d201b7e1a\") " Dec 08 18:04:57 crc kubenswrapper[5118]: I1208 18:04:57.976803 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c26bc-8908-4abc-89f5-7f1d201b7e1a-kube-api-access-z9qc9" (OuterVolumeSpecName: "kube-api-access-z9qc9") pod "736c26bc-8908-4abc-89f5-7f1d201b7e1a" (UID: "736c26bc-8908-4abc-89f5-7f1d201b7e1a"). InnerVolumeSpecName "kube-api-access-z9qc9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:04:58 crc kubenswrapper[5118]: I1208 18:04:58.013422 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c26bc-8908-4abc-89f5-7f1d201b7e1a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "736c26bc-8908-4abc-89f5-7f1d201b7e1a" (UID: "736c26bc-8908-4abc-89f5-7f1d201b7e1a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:04:58 crc kubenswrapper[5118]: I1208 18:04:58.072231 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z9qc9\" (UniqueName: \"kubernetes.io/projected/736c26bc-8908-4abc-89f5-7f1d201b7e1a-kube-api-access-z9qc9\") on node \"crc\" DevicePath \"\"" Dec 08 18:04:58 crc kubenswrapper[5118]: I1208 18:04:58.072263 5118 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/736c26bc-8908-4abc-89f5-7f1d201b7e1a-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 08 18:04:58 crc kubenswrapper[5118]: I1208 18:04:58.702156 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gctth_must-gather-5cz8j_736c26bc-8908-4abc-89f5-7f1d201b7e1a/copy/0.log" Dec 08 18:04:58 crc kubenswrapper[5118]: I1208 18:04:58.703521 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gctth/must-gather-5cz8j" Dec 08 18:04:58 crc kubenswrapper[5118]: I1208 18:04:58.703569 5118 scope.go:117] "RemoveContainer" containerID="ae16ad5c65263932310921ce0cb3ae2fab8bc690678ac807d442c726789b40d1" Dec 08 18:04:58 crc kubenswrapper[5118]: I1208 18:04:58.723831 5118 scope.go:117] "RemoveContainer" containerID="9d3704ce8d527f3fa04596959aa3b7160c3b9f5f3211911377a6920f1de11d8b" Dec 08 18:04:59 crc kubenswrapper[5118]: I1208 18:04:59.437166 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c26bc-8908-4abc-89f5-7f1d201b7e1a" path="/var/lib/kubelet/pods/736c26bc-8908-4abc-89f5-7f1d201b7e1a/volumes" Dec 08 18:05:01 crc kubenswrapper[5118]: I1208 18:05:01.962421 5118 patch_prober.go:28] interesting pod/machine-config-daemon-8vxnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:05:01 crc kubenswrapper[5118]: I1208 18:05:01.962822 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:05:01 crc kubenswrapper[5118]: I1208 18:05:01.962898 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" Dec 08 18:05:01 crc kubenswrapper[5118]: I1208 18:05:01.963570 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19427898a1b36b27d54897d19b17f5f4ac1fb5469ef84f99251a664345051ae2"} pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:05:01 crc kubenswrapper[5118]: I1208 18:05:01.963664 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" podUID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerName="machine-config-daemon" containerID="cri-o://19427898a1b36b27d54897d19b17f5f4ac1fb5469ef84f99251a664345051ae2" gracePeriod=600 Dec 08 18:05:02 crc kubenswrapper[5118]: I1208 18:05:02.747658 5118 generic.go:358] "Generic (PLEG): container finished" 
podID="cee6a3dc-47d4-4996-9c78-cb6c6b626d71" containerID="19427898a1b36b27d54897d19b17f5f4ac1fb5469ef84f99251a664345051ae2" exitCode=0 Dec 08 18:05:02 crc kubenswrapper[5118]: I1208 18:05:02.747733 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerDied","Data":"19427898a1b36b27d54897d19b17f5f4ac1fb5469ef84f99251a664345051ae2"} Dec 08 18:05:02 crc kubenswrapper[5118]: I1208 18:05:02.748455 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vxnt" event={"ID":"cee6a3dc-47d4-4996-9c78-cb6c6b626d71","Type":"ContainerStarted","Data":"b41ea95f7f1da00ecb2b51a1adbb66c4fb47c4d0cae26fc3e9fe36cbbdee56d4"} Dec 08 18:05:02 crc kubenswrapper[5118]: I1208 18:05:02.748480 5118 scope.go:117] "RemoveContainer" containerID="22c5cc7c9c4c3cf08ced7571f97d2349fe0ba35b6b6efc4a95ae6a4960c893da" Dec 08 18:05:03 crc kubenswrapper[5118]: I1208 18:05:03.311512 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:05:03 crc kubenswrapper[5118]: I1208 18:05:03.312020 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:05:03 crc kubenswrapper[5118]: I1208 18:05:03.355774 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:05:03 crc kubenswrapper[5118]: I1208 18:05:03.805839 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5gtms" Dec 08 18:05:03 crc kubenswrapper[5118]: I1208 18:05:03.890680 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5gtms"] Dec 08 18:05:03 crc kubenswrapper[5118]: I1208 18:05:03.921952 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xpnf9"] Dec 08 18:05:03 crc kubenswrapper[5118]: I1208 18:05:03.922236 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xpnf9" podUID="259174f2-efbe-4b44-ae95-b0d2f2865ab9" containerName="registry-server" containerID="cri-o://8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" gracePeriod=2 Dec 08 18:05:04 crc kubenswrapper[5118]: I1208 18:05:04.108504 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-xpnf9" podUID="259174f2-efbe-4b44-ae95-b0d2f2865ab9" containerName="registry-server" probeResult="failure" output="" Dec 08 18:05:04 crc kubenswrapper[5118]: E1208 18:05:04.672398 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f is running failed: container process not found" containerID="8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 18:05:04 crc kubenswrapper[5118]: E1208 18:05:04.673080 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f is running failed: container process not found" 
containerID="8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 18:05:04 crc kubenswrapper[5118]: E1208 18:05:04.673362 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f is running failed: container process not found" containerID="8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 18:05:04 crc kubenswrapper[5118]: E1208 18:05:04.673400 5118 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-xpnf9" podUID="259174f2-efbe-4b44-ae95-b0d2f2865ab9" containerName="registry-server" probeResult="unknown" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.506622 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.597095 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-utilities\") pod \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.597175 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-catalog-content\") pod \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.597254 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxpq7\" (UniqueName: \"kubernetes.io/projected/259174f2-efbe-4b44-ae95-b0d2f2865ab9-kube-api-access-gxpq7\") pod \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\" (UID: \"259174f2-efbe-4b44-ae95-b0d2f2865ab9\") " Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.598271 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-utilities" (OuterVolumeSpecName: "utilities") pod "259174f2-efbe-4b44-ae95-b0d2f2865ab9" (UID: "259174f2-efbe-4b44-ae95-b0d2f2865ab9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.606086 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259174f2-efbe-4b44-ae95-b0d2f2865ab9-kube-api-access-gxpq7" (OuterVolumeSpecName: "kube-api-access-gxpq7") pod "259174f2-efbe-4b44-ae95-b0d2f2865ab9" (UID: "259174f2-efbe-4b44-ae95-b0d2f2865ab9"). InnerVolumeSpecName "kube-api-access-gxpq7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.686038 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "259174f2-efbe-4b44-ae95-b0d2f2865ab9" (UID: "259174f2-efbe-4b44-ae95-b0d2f2865ab9"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.699361 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.699389 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259174f2-efbe-4b44-ae95-b0d2f2865ab9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.699400 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gxpq7\" (UniqueName: \"kubernetes.io/projected/259174f2-efbe-4b44-ae95-b0d2f2865ab9-kube-api-access-gxpq7\") on node \"crc\" DevicePath \"\"" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.772112 5118 generic.go:358] "Generic (PLEG): container finished" podID="259174f2-efbe-4b44-ae95-b0d2f2865ab9" containerID="8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" exitCode=0 Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.772810 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpnf9" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.781013 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpnf9" event={"ID":"259174f2-efbe-4b44-ae95-b0d2f2865ab9","Type":"ContainerDied","Data":"8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f"} Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.781162 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpnf9" event={"ID":"259174f2-efbe-4b44-ae95-b0d2f2865ab9","Type":"ContainerDied","Data":"2e369e900be26b57b9f7a1bc5cab886fe858f0af35227f2d72416c136d57cef3"} Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.781212 5118 scope.go:117] "RemoveContainer" containerID="8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.800388 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xpnf9"] Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.801627 5118 scope.go:117] "RemoveContainer" containerID="263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.810518 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xpnf9"] Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.835383 5118 scope.go:117] "RemoveContainer" containerID="3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.856648 5118 scope.go:117] "RemoveContainer" containerID="8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" Dec 08 18:05:05 crc kubenswrapper[5118]: E1208 18:05:05.857740 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f\": container with ID starting with 8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f not found: ID does not exist" containerID="8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f" Dec 08 18:05:05 crc kubenswrapper[5118]: 
I1208 18:05:05.858441 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f"} err="failed to get container status \"8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f\": rpc error: code = NotFound desc = could not find container \"8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f\": container with ID starting with 8deb65eee49f010f7f62d0bcecc121708f36ec481708b7c5befa86871164050f not found: ID does not exist" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.858466 5118 scope.go:117] "RemoveContainer" containerID="263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75" Dec 08 18:05:05 crc kubenswrapper[5118]: E1208 18:05:05.858942 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75\": container with ID starting with 263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75 not found: ID does not exist" containerID="263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.858958 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75"} err="failed to get container status \"263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75\": rpc error: code = NotFound desc = could not find container \"263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75\": container with ID starting with 263fce943fcc19850127cc567b2fba9042c7eca09df2b1f475063107470ddb75 not found: ID does not exist" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.858970 5118 scope.go:117] "RemoveContainer" containerID="3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367" Dec 08 18:05:05 crc kubenswrapper[5118]: E1208 18:05:05.859370 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367\": container with ID starting with 3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367 not found: ID does not exist" containerID="3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367" Dec 08 18:05:05 crc kubenswrapper[5118]: I1208 18:05:05.859391 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367"} err="failed to get container status \"3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367\": rpc error: code = NotFound desc = could not find container \"3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367\": container with ID starting with 3d0f4c8962f491a52cab1196ffbcc10e41386b8acb275de3d62e011f2c313367 not found: ID does not exist" Dec 08 18:05:07 crc kubenswrapper[5118]: I1208 18:05:07.436327 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259174f2-efbe-4b44-ae95-b0d2f2865ab9" path="/var/lib/kubelet/pods/259174f2-efbe-4b44-ae95-b0d2f2865ab9/volumes"